Test Report: KVM_Linux_crio 20242

454e3a8af9229d80194750b761a4b9142724e045:2025-01-20:37993

Test fail (13/311)

TestAddons/parallel/Ingress (154.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-917221 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-917221 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-917221 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [55dfbb36-858b-400d-a26c-3659038532ba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [55dfbb36-858b-400d-a26c-3659038532ba] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004643956s
I0120 12:53:31.224946 1927672 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-917221 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.318898042s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-917221 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.225
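Note (editorial, not part of the test output): the "Process exited with status 28" in the stderr block above is the exit code of the curl command run inside the VM over ssh; 28 is curl's "operation timed out" error code (CURLE_OPERATION_TIMEDOUT), so the request to the ingress timed out rather than being refused. A minimal manual re-check against the same profile might look like the sketch below; the kubectl line is only a suggested follow-up for inspecting the ingress-nginx controller, not something the test itself runs.

	out/minikube-linux-amd64 -p addons-917221 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-917221 -n ingress-nginx get pods,svc -o wide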
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-917221 -n addons-917221
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-917221 logs -n 25: (1.380620478s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-454309                                                                     | download-only-454309 | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC | 20 Jan 25 12:50 UTC |
	| delete  | -p download-only-567505                                                                     | download-only-567505 | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC | 20 Jan 25 12:50 UTC |
	| delete  | -p download-only-454309                                                                     | download-only-454309 | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC | 20 Jan 25 12:50 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-946086 | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC |                     |
	|         | binary-mirror-946086                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35627                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-946086                                                                     | binary-mirror-946086 | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC | 20 Jan 25 12:50 UTC |
	| addons  | enable dashboard -p                                                                         | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC |                     |
	|         | addons-917221                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC |                     |
	|         | addons-917221                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-917221 --wait=true                                                                | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC | 20 Jan 25 12:52 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-917221 addons disable                                                                | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:52 UTC | 20 Jan 25 12:52 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-917221 addons disable                                                                | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:52 UTC | 20 Jan 25 12:52 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:52 UTC | 20 Jan 25 12:52 UTC |
	|         | -p addons-917221                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-917221 addons                                                                        | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:52 UTC | 20 Jan 25 12:52 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-917221 addons                                                                        | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-917221 addons disable                                                                | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-917221 ip                                                                            | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	| addons  | addons-917221 addons disable                                                                | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-917221 addons disable                                                                | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-917221 addons                                                                        | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-917221 ssh cat                                                                       | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | /opt/local-path-provisioner/pvc-36fb8dda-e079-4084-a36d-f8edd1c96d8a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-917221 addons disable                                                                | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-917221 addons                                                                        | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-917221 ssh curl -s                                                                   | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-917221 addons                                                                        | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-917221 addons                                                                        | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:53 UTC | 20 Jan 25 12:53 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-917221 ip                                                                            | addons-917221        | jenkins | v1.35.0 | 20 Jan 25 12:55 UTC | 20 Jan 25 12:55 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:50:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:50:22.341323 1928285 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:50:22.341435 1928285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:50:22.341443 1928285 out.go:358] Setting ErrFile to fd 2...
	I0120 12:50:22.341448 1928285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:50:22.341666 1928285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 12:50:22.342423 1928285 out.go:352] Setting JSON to false
	I0120 12:50:22.343609 1928285 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16368,"bootTime":1737361054,"procs":404,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:50:22.343729 1928285 start.go:139] virtualization: kvm guest
	I0120 12:50:22.345983 1928285 out.go:177] * [addons-917221] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:50:22.347488 1928285 notify.go:220] Checking for updates...
	I0120 12:50:22.347507 1928285 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 12:50:22.349213 1928285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:50:22.350666 1928285 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 12:50:22.351987 1928285 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 12:50:22.353490 1928285 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 12:50:22.355262 1928285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 12:50:22.356968 1928285 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:50:22.391603 1928285 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 12:50:22.393063 1928285 start.go:297] selected driver: kvm2
	I0120 12:50:22.393077 1928285 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:50:22.393102 1928285 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 12:50:22.393891 1928285 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:50:22.394000 1928285 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:50:22.410394 1928285 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:50:22.410460 1928285 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:50:22.410793 1928285 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:50:22.410831 1928285 cni.go:84] Creating CNI manager for ""
	I0120 12:50:22.410889 1928285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:50:22.410899 1928285 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 12:50:22.410953 1928285 start.go:340] cluster config:
	{Name:addons-917221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-917221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:50:22.411078 1928285 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:50:22.413352 1928285 out.go:177] * Starting "addons-917221" primary control-plane node in "addons-917221" cluster
	I0120 12:50:22.414933 1928285 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:50:22.415011 1928285 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:50:22.415027 1928285 cache.go:56] Caching tarball of preloaded images
	I0120 12:50:22.415132 1928285 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 12:50:22.415145 1928285 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 12:50:22.415564 1928285 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/config.json ...
	I0120 12:50:22.415591 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/config.json: {Name:mk5a2edefe295e7313b0883cb832e6041bc8dd29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:22.415761 1928285 start.go:360] acquireMachinesLock for addons-917221: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 12:50:22.415825 1928285 start.go:364] duration metric: took 46.748µs to acquireMachinesLock for "addons-917221"
	I0120 12:50:22.415852 1928285 start.go:93] Provisioning new machine with config: &{Name:addons-917221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-917221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:50:22.415953 1928285 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 12:50:22.417870 1928285 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0120 12:50:22.418036 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:50:22.418095 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:50:22.433576 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I0120 12:50:22.434100 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:50:22.434750 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:50:22.434774 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:50:22.435155 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:50:22.435434 1928285 main.go:141] libmachine: (addons-917221) Calling .GetMachineName
	I0120 12:50:22.435621 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:50:22.435801 1928285 start.go:159] libmachine.API.Create for "addons-917221" (driver="kvm2")
	I0120 12:50:22.435834 1928285 client.go:168] LocalClient.Create starting
	I0120 12:50:22.435889 1928285 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem
	I0120 12:50:22.684069 1928285 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem
	I0120 12:50:22.835887 1928285 main.go:141] libmachine: Running pre-create checks...
	I0120 12:50:22.835956 1928285 main.go:141] libmachine: (addons-917221) Calling .PreCreateCheck
	I0120 12:50:22.836567 1928285 main.go:141] libmachine: (addons-917221) Calling .GetConfigRaw
	I0120 12:50:22.837061 1928285 main.go:141] libmachine: Creating machine...
	I0120 12:50:22.837076 1928285 main.go:141] libmachine: (addons-917221) Calling .Create
	I0120 12:50:22.837249 1928285 main.go:141] libmachine: (addons-917221) creating KVM machine...
	I0120 12:50:22.837271 1928285 main.go:141] libmachine: (addons-917221) creating network...
	I0120 12:50:22.838673 1928285 main.go:141] libmachine: (addons-917221) DBG | found existing default KVM network
	I0120 12:50:22.839475 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:22.839285 1928307 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I0120 12:50:22.839498 1928285 main.go:141] libmachine: (addons-917221) DBG | created network xml: 
	I0120 12:50:22.839511 1928285 main.go:141] libmachine: (addons-917221) DBG | <network>
	I0120 12:50:22.839518 1928285 main.go:141] libmachine: (addons-917221) DBG |   <name>mk-addons-917221</name>
	I0120 12:50:22.839523 1928285 main.go:141] libmachine: (addons-917221) DBG |   <dns enable='no'/>
	I0120 12:50:22.839534 1928285 main.go:141] libmachine: (addons-917221) DBG |   
	I0120 12:50:22.839543 1928285 main.go:141] libmachine: (addons-917221) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0120 12:50:22.839554 1928285 main.go:141] libmachine: (addons-917221) DBG |     <dhcp>
	I0120 12:50:22.839562 1928285 main.go:141] libmachine: (addons-917221) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0120 12:50:22.839566 1928285 main.go:141] libmachine: (addons-917221) DBG |     </dhcp>
	I0120 12:50:22.839571 1928285 main.go:141] libmachine: (addons-917221) DBG |   </ip>
	I0120 12:50:22.839575 1928285 main.go:141] libmachine: (addons-917221) DBG |   
	I0120 12:50:22.839580 1928285 main.go:141] libmachine: (addons-917221) DBG | </network>
	I0120 12:50:22.839593 1928285 main.go:141] libmachine: (addons-917221) DBG | 
	I0120 12:50:22.845431 1928285 main.go:141] libmachine: (addons-917221) DBG | trying to create private KVM network mk-addons-917221 192.168.39.0/24...
	I0120 12:50:22.917804 1928285 main.go:141] libmachine: (addons-917221) DBG | private KVM network mk-addons-917221 192.168.39.0/24 created
	I0120 12:50:22.917852 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:22.917780 1928307 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 12:50:22.917869 1928285 main.go:141] libmachine: (addons-917221) setting up store path in /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221 ...
	I0120 12:50:22.917888 1928285 main.go:141] libmachine: (addons-917221) building disk image from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:50:22.917928 1928285 main.go:141] libmachine: (addons-917221) Downloading /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 12:50:23.203381 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:23.203187 1928307 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa...
	I0120 12:50:23.257414 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:23.257247 1928307 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/addons-917221.rawdisk...
	I0120 12:50:23.257448 1928285 main.go:141] libmachine: (addons-917221) DBG | Writing magic tar header
	I0120 12:50:23.257459 1928285 main.go:141] libmachine: (addons-917221) DBG | Writing SSH key tar header
	I0120 12:50:23.257466 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:23.257374 1928307 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221 ...
	I0120 12:50:23.257477 1928285 main.go:141] libmachine: (addons-917221) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221
	I0120 12:50:23.257483 1928285 main.go:141] libmachine: (addons-917221) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines
	I0120 12:50:23.257618 1928285 main.go:141] libmachine: (addons-917221) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221 (perms=drwx------)
	I0120 12:50:23.257663 1928285 main.go:141] libmachine: (addons-917221) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines (perms=drwxr-xr-x)
	I0120 12:50:23.257679 1928285 main.go:141] libmachine: (addons-917221) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 12:50:23.257721 1928285 main.go:141] libmachine: (addons-917221) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube (perms=drwxr-xr-x)
	I0120 12:50:23.257749 1928285 main.go:141] libmachine: (addons-917221) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423 (perms=drwxrwxr-x)
	I0120 12:50:23.257758 1928285 main.go:141] libmachine: (addons-917221) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423
	I0120 12:50:23.257771 1928285 main.go:141] libmachine: (addons-917221) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 12:50:23.257778 1928285 main.go:141] libmachine: (addons-917221) DBG | checking permissions on dir: /home/jenkins
	I0120 12:50:23.257789 1928285 main.go:141] libmachine: (addons-917221) DBG | checking permissions on dir: /home
	I0120 12:50:23.257799 1928285 main.go:141] libmachine: (addons-917221) DBG | skipping /home - not owner
	I0120 12:50:23.257808 1928285 main.go:141] libmachine: (addons-917221) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 12:50:23.257819 1928285 main.go:141] libmachine: (addons-917221) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 12:50:23.257828 1928285 main.go:141] libmachine: (addons-917221) creating domain...
	I0120 12:50:23.258868 1928285 main.go:141] libmachine: (addons-917221) define libvirt domain using xml: 
	I0120 12:50:23.258891 1928285 main.go:141] libmachine: (addons-917221) <domain type='kvm'>
	I0120 12:50:23.258902 1928285 main.go:141] libmachine: (addons-917221)   <name>addons-917221</name>
	I0120 12:50:23.258909 1928285 main.go:141] libmachine: (addons-917221)   <memory unit='MiB'>4000</memory>
	I0120 12:50:23.258917 1928285 main.go:141] libmachine: (addons-917221)   <vcpu>2</vcpu>
	I0120 12:50:23.258924 1928285 main.go:141] libmachine: (addons-917221)   <features>
	I0120 12:50:23.258952 1928285 main.go:141] libmachine: (addons-917221)     <acpi/>
	I0120 12:50:23.258972 1928285 main.go:141] libmachine: (addons-917221)     <apic/>
	I0120 12:50:23.258985 1928285 main.go:141] libmachine: (addons-917221)     <pae/>
	I0120 12:50:23.258994 1928285 main.go:141] libmachine: (addons-917221)     
	I0120 12:50:23.259047 1928285 main.go:141] libmachine: (addons-917221)   </features>
	I0120 12:50:23.259074 1928285 main.go:141] libmachine: (addons-917221)   <cpu mode='host-passthrough'>
	I0120 12:50:23.259087 1928285 main.go:141] libmachine: (addons-917221)   
	I0120 12:50:23.259106 1928285 main.go:141] libmachine: (addons-917221)   </cpu>
	I0120 12:50:23.259116 1928285 main.go:141] libmachine: (addons-917221)   <os>
	I0120 12:50:23.259122 1928285 main.go:141] libmachine: (addons-917221)     <type>hvm</type>
	I0120 12:50:23.259134 1928285 main.go:141] libmachine: (addons-917221)     <boot dev='cdrom'/>
	I0120 12:50:23.259149 1928285 main.go:141] libmachine: (addons-917221)     <boot dev='hd'/>
	I0120 12:50:23.259161 1928285 main.go:141] libmachine: (addons-917221)     <bootmenu enable='no'/>
	I0120 12:50:23.259171 1928285 main.go:141] libmachine: (addons-917221)   </os>
	I0120 12:50:23.259178 1928285 main.go:141] libmachine: (addons-917221)   <devices>
	I0120 12:50:23.259189 1928285 main.go:141] libmachine: (addons-917221)     <disk type='file' device='cdrom'>
	I0120 12:50:23.259202 1928285 main.go:141] libmachine: (addons-917221)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/boot2docker.iso'/>
	I0120 12:50:23.259213 1928285 main.go:141] libmachine: (addons-917221)       <target dev='hdc' bus='scsi'/>
	I0120 12:50:23.259255 1928285 main.go:141] libmachine: (addons-917221)       <readonly/>
	I0120 12:50:23.259281 1928285 main.go:141] libmachine: (addons-917221)     </disk>
	I0120 12:50:23.259297 1928285 main.go:141] libmachine: (addons-917221)     <disk type='file' device='disk'>
	I0120 12:50:23.259310 1928285 main.go:141] libmachine: (addons-917221)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 12:50:23.259328 1928285 main.go:141] libmachine: (addons-917221)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/addons-917221.rawdisk'/>
	I0120 12:50:23.259340 1928285 main.go:141] libmachine: (addons-917221)       <target dev='hda' bus='virtio'/>
	I0120 12:50:23.259353 1928285 main.go:141] libmachine: (addons-917221)     </disk>
	I0120 12:50:23.259363 1928285 main.go:141] libmachine: (addons-917221)     <interface type='network'>
	I0120 12:50:23.259374 1928285 main.go:141] libmachine: (addons-917221)       <source network='mk-addons-917221'/>
	I0120 12:50:23.259385 1928285 main.go:141] libmachine: (addons-917221)       <model type='virtio'/>
	I0120 12:50:23.259397 1928285 main.go:141] libmachine: (addons-917221)     </interface>
	I0120 12:50:23.259409 1928285 main.go:141] libmachine: (addons-917221)     <interface type='network'>
	I0120 12:50:23.259419 1928285 main.go:141] libmachine: (addons-917221)       <source network='default'/>
	I0120 12:50:23.259430 1928285 main.go:141] libmachine: (addons-917221)       <model type='virtio'/>
	I0120 12:50:23.259443 1928285 main.go:141] libmachine: (addons-917221)     </interface>
	I0120 12:50:23.259456 1928285 main.go:141] libmachine: (addons-917221)     <serial type='pty'>
	I0120 12:50:23.259465 1928285 main.go:141] libmachine: (addons-917221)       <target port='0'/>
	I0120 12:50:23.259475 1928285 main.go:141] libmachine: (addons-917221)     </serial>
	I0120 12:50:23.259490 1928285 main.go:141] libmachine: (addons-917221)     <console type='pty'>
	I0120 12:50:23.259501 1928285 main.go:141] libmachine: (addons-917221)       <target type='serial' port='0'/>
	I0120 12:50:23.259510 1928285 main.go:141] libmachine: (addons-917221)     </console>
	I0120 12:50:23.259520 1928285 main.go:141] libmachine: (addons-917221)     <rng model='virtio'>
	I0120 12:50:23.259539 1928285 main.go:141] libmachine: (addons-917221)       <backend model='random'>/dev/random</backend>
	I0120 12:50:23.259556 1928285 main.go:141] libmachine: (addons-917221)     </rng>
	I0120 12:50:23.259568 1928285 main.go:141] libmachine: (addons-917221)     
	I0120 12:50:23.259577 1928285 main.go:141] libmachine: (addons-917221)     
	I0120 12:50:23.259588 1928285 main.go:141] libmachine: (addons-917221)   </devices>
	I0120 12:50:23.259597 1928285 main.go:141] libmachine: (addons-917221) </domain>
	I0120 12:50:23.259610 1928285 main.go:141] libmachine: (addons-917221) 
	I0120 12:50:23.266701 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:09:ef:87 in network default
	I0120 12:50:23.267269 1928285 main.go:141] libmachine: (addons-917221) starting domain...
	I0120 12:50:23.267285 1928285 main.go:141] libmachine: (addons-917221) ensuring networks are active...
	I0120 12:50:23.267293 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:23.268021 1928285 main.go:141] libmachine: (addons-917221) Ensuring network default is active
	I0120 12:50:23.268378 1928285 main.go:141] libmachine: (addons-917221) Ensuring network mk-addons-917221 is active
	I0120 12:50:23.268905 1928285 main.go:141] libmachine: (addons-917221) getting domain XML...
	I0120 12:50:23.269528 1928285 main.go:141] libmachine: (addons-917221) creating domain...
	I0120 12:50:24.771359 1928285 main.go:141] libmachine: (addons-917221) waiting for IP...
	I0120 12:50:24.772282 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:24.772772 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:24.772848 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:24.772781 1928307 retry.go:31] will retry after 211.839723ms: waiting for domain to come up
	I0120 12:50:24.986290 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:24.986768 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:24.986826 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:24.986763 1928307 retry.go:31] will retry after 330.806676ms: waiting for domain to come up
	I0120 12:50:25.319446 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:25.319915 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:25.319948 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:25.319881 1928307 retry.go:31] will retry after 453.132675ms: waiting for domain to come up
	I0120 12:50:25.774472 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:25.774965 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:25.775006 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:25.774942 1928307 retry.go:31] will retry after 511.789125ms: waiting for domain to come up
	I0120 12:50:26.288736 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:26.289191 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:26.289219 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:26.289159 1928307 retry.go:31] will retry after 624.671311ms: waiting for domain to come up
	I0120 12:50:26.915464 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:26.915951 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:26.916003 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:26.915950 1928307 retry.go:31] will retry after 724.449197ms: waiting for domain to come up
	I0120 12:50:27.641790 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:27.642252 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:27.642280 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:27.642196 1928307 retry.go:31] will retry after 732.861763ms: waiting for domain to come up
	I0120 12:50:28.377217 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:28.377660 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:28.377694 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:28.377606 1928307 retry.go:31] will retry after 1.443595343s: waiting for domain to come up
	I0120 12:50:29.823264 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:29.823555 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:29.823622 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:29.823544 1928307 retry.go:31] will retry after 1.724687684s: waiting for domain to come up
	I0120 12:50:31.550435 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:31.550944 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:31.550976 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:31.550886 1928307 retry.go:31] will retry after 1.916878967s: waiting for domain to come up
	I0120 12:50:33.469535 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:33.469895 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:33.469944 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:33.469883 1928307 retry.go:31] will retry after 2.414266461s: waiting for domain to come up
	I0120 12:50:35.885802 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:35.886353 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:35.886389 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:35.886328 1928307 retry.go:31] will retry after 3.149609303s: waiting for domain to come up
	I0120 12:50:39.038449 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:39.038897 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:39.038931 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:39.038866 1928307 retry.go:31] will retry after 3.438282265s: waiting for domain to come up
	I0120 12:50:42.479399 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:42.479944 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find current IP address of domain addons-917221 in network mk-addons-917221
	I0120 12:50:42.479977 1928285 main.go:141] libmachine: (addons-917221) DBG | I0120 12:50:42.479884 1928307 retry.go:31] will retry after 4.847209877s: waiting for domain to come up
	I0120 12:50:47.332963 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.333378 1928285 main.go:141] libmachine: (addons-917221) found domain IP: 192.168.39.225
	I0120 12:50:47.333402 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has current primary IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.333418 1928285 main.go:141] libmachine: (addons-917221) reserving static IP address...
	I0120 12:50:47.333932 1928285 main.go:141] libmachine: (addons-917221) DBG | unable to find host DHCP lease matching {name: "addons-917221", mac: "52:54:00:68:bf:78", ip: "192.168.39.225"} in network mk-addons-917221
	I0120 12:50:47.412572 1928285 main.go:141] libmachine: (addons-917221) reserved static IP address 192.168.39.225 for domain addons-917221
	I0120 12:50:47.412654 1928285 main.go:141] libmachine: (addons-917221) DBG | Getting to WaitForSSH function...
	I0120 12:50:47.412670 1928285 main.go:141] libmachine: (addons-917221) waiting for SSH...
	I0120 12:50:47.415495 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.415941 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:47.415971 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.416157 1928285 main.go:141] libmachine: (addons-917221) DBG | Using SSH client type: external
	I0120 12:50:47.416182 1928285 main.go:141] libmachine: (addons-917221) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa (-rw-------)
	I0120 12:50:47.416228 1928285 main.go:141] libmachine: (addons-917221) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 12:50:47.416244 1928285 main.go:141] libmachine: (addons-917221) DBG | About to run SSH command:
	I0120 12:50:47.416257 1928285 main.go:141] libmachine: (addons-917221) DBG | exit 0
	I0120 12:50:47.543042 1928285 main.go:141] libmachine: (addons-917221) DBG | SSH cmd err, output: <nil>: 
	I0120 12:50:47.543373 1928285 main.go:141] libmachine: (addons-917221) KVM machine creation complete
	I0120 12:50:47.543770 1928285 main.go:141] libmachine: (addons-917221) Calling .GetConfigRaw
	I0120 12:50:47.544360 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:50:47.544564 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:50:47.544699 1928285 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 12:50:47.544710 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:50:47.545917 1928285 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 12:50:47.545932 1928285 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 12:50:47.545938 1928285 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 12:50:47.545945 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:47.548372 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.548693 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:47.548722 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.548922 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:47.549150 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:47.549329 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:47.549459 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:47.549635 1928285 main.go:141] libmachine: Using SSH client type: native
	I0120 12:50:47.549848 1928285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0120 12:50:47.549874 1928285 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 12:50:47.658131 1928285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:50:47.658159 1928285 main.go:141] libmachine: Detecting the provisioner...
	I0120 12:50:47.658167 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:47.661094 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.661429 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:47.661464 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.661643 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:47.661862 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:47.662132 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:47.662270 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:47.662449 1928285 main.go:141] libmachine: Using SSH client type: native
	I0120 12:50:47.662684 1928285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0120 12:50:47.662699 1928285 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 12:50:47.771680 1928285 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 12:50:47.771773 1928285 main.go:141] libmachine: found compatible host: buildroot
	I0120 12:50:47.771784 1928285 main.go:141] libmachine: Provisioning with buildroot...
	I0120 12:50:47.771794 1928285 main.go:141] libmachine: (addons-917221) Calling .GetMachineName
	I0120 12:50:47.772101 1928285 buildroot.go:166] provisioning hostname "addons-917221"
	I0120 12:50:47.772137 1928285 main.go:141] libmachine: (addons-917221) Calling .GetMachineName
	I0120 12:50:47.772325 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:47.775295 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.775707 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:47.775740 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.775867 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:47.776042 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:47.776215 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:47.776376 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:47.776580 1928285 main.go:141] libmachine: Using SSH client type: native
	I0120 12:50:47.776808 1928285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0120 12:50:47.776823 1928285 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-917221 && echo "addons-917221" | sudo tee /etc/hostname
	I0120 12:50:47.902103 1928285 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-917221
	
	I0120 12:50:47.902135 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:47.904697 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.905015 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:47.905060 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:47.905192 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:47.905408 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:47.905595 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:47.905745 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:47.905895 1928285 main.go:141] libmachine: Using SSH client type: native
	I0120 12:50:47.906123 1928285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0120 12:50:47.906145 1928285 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-917221' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-917221/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-917221' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 12:50:48.023851 1928285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 12:50:48.023890 1928285 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 12:50:48.023929 1928285 buildroot.go:174] setting up certificates
	I0120 12:50:48.023942 1928285 provision.go:84] configureAuth start
	I0120 12:50:48.023954 1928285 main.go:141] libmachine: (addons-917221) Calling .GetMachineName
	I0120 12:50:48.024297 1928285 main.go:141] libmachine: (addons-917221) Calling .GetIP
	I0120 12:50:48.026892 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.027440 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.027471 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.027692 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:48.030254 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.030649 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.030685 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.030809 1928285 provision.go:143] copyHostCerts
	I0120 12:50:48.030881 1928285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 12:50:48.031049 1928285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 12:50:48.031151 1928285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 12:50:48.031205 1928285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.addons-917221 san=[127.0.0.1 192.168.39.225 addons-917221 localhost minikube]
	I0120 12:50:48.145491 1928285 provision.go:177] copyRemoteCerts
	I0120 12:50:48.145582 1928285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 12:50:48.145615 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:48.148821 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.149188 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.149215 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.149412 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:48.149636 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:48.149781 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:48.149936 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:50:48.234474 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 12:50:48.266560 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 12:50:48.291749 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 12:50:48.316617 1928285 provision.go:87] duration metric: took 292.656027ms to configureAuth
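	Not executed by the test, but the SANs requested in the provision step above (127.0.0.1, 192.168.39.225, addons-917221, localhost, minikube) could be verified on the copied server cert with openssl, for example:
	  sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
	  # expected to include 127.0.0.1, 192.168.39.225, addons-917221, localhost and minikube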
	I0120 12:50:48.316661 1928285 buildroot.go:189] setting minikube options for container-runtime
	I0120 12:50:48.316871 1928285 config.go:182] Loaded profile config "addons-917221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:50:48.316964 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:48.319808 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.320163 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.320183 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.320426 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:48.320653 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:48.320841 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:48.320959 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:48.321127 1928285 main.go:141] libmachine: Using SSH client type: native
	I0120 12:50:48.321354 1928285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0120 12:50:48.321376 1928285 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 12:50:48.554414 1928285 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 12:50:48.554483 1928285 main.go:141] libmachine: Checking connection to Docker...
	I0120 12:50:48.554497 1928285 main.go:141] libmachine: (addons-917221) Calling .GetURL
	I0120 12:50:48.555885 1928285 main.go:141] libmachine: (addons-917221) DBG | using libvirt version 6000000
	I0120 12:50:48.558200 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.558584 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.558655 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.558776 1928285 main.go:141] libmachine: Docker is up and running!
	I0120 12:50:48.558791 1928285 main.go:141] libmachine: Reticulating splines...
	I0120 12:50:48.558799 1928285 client.go:171] duration metric: took 26.122952271s to LocalClient.Create
	I0120 12:50:48.558824 1928285 start.go:167] duration metric: took 26.123024556s to libmachine.API.Create "addons-917221"
	I0120 12:50:48.558834 1928285 start.go:293] postStartSetup for "addons-917221" (driver="kvm2")
	I0120 12:50:48.558844 1928285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 12:50:48.558871 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:50:48.559131 1928285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 12:50:48.559178 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:48.561164 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.561479 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.561509 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.561662 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:48.561830 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:48.561994 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:48.562121 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:50:48.645199 1928285 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 12:50:48.649723 1928285 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 12:50:48.649752 1928285 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 12:50:48.649825 1928285 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 12:50:48.649848 1928285 start.go:296] duration metric: took 91.008429ms for postStartSetup
	I0120 12:50:48.649901 1928285 main.go:141] libmachine: (addons-917221) Calling .GetConfigRaw
	I0120 12:50:48.650519 1928285 main.go:141] libmachine: (addons-917221) Calling .GetIP
	I0120 12:50:48.653316 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.653794 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.653825 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.654057 1928285 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/config.json ...
	I0120 12:50:48.654263 1928285 start.go:128] duration metric: took 26.238295069s to createHost
	I0120 12:50:48.654293 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:48.656739 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.657061 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.657104 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.657242 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:48.657398 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:48.657544 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:48.657677 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:48.657800 1928285 main.go:141] libmachine: Using SSH client type: native
	I0120 12:50:48.657983 1928285 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0120 12:50:48.657996 1928285 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 12:50:48.767738 1928285 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737377448.742639302
	
	I0120 12:50:48.767768 1928285 fix.go:216] guest clock: 1737377448.742639302
	I0120 12:50:48.767780 1928285 fix.go:229] Guest: 2025-01-20 12:50:48.742639302 +0000 UTC Remote: 2025-01-20 12:50:48.654277615 +0000 UTC m=+26.353701529 (delta=88.361687ms)
	I0120 12:50:48.767842 1928285 fix.go:200] guest clock delta is within tolerance: 88.361687ms
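	As a quick arithmetic check of the delta reported above: 48.742639302 s (guest) - 48.654277615 s (host) = 0.088361687 s = 88.361687ms, which is why the guest clock is accepted without adjustment.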
	I0120 12:50:48.767850 1928285 start.go:83] releasing machines lock for "addons-917221", held for 26.352012136s
	I0120 12:50:48.767886 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:50:48.768174 1928285 main.go:141] libmachine: (addons-917221) Calling .GetIP
	I0120 12:50:48.771318 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.772204 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.772229 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.772485 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:50:48.773094 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:50:48.773314 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:50:48.773420 1928285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 12:50:48.773487 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:48.773520 1928285 ssh_runner.go:195] Run: cat /version.json
	I0120 12:50:48.773544 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:50:48.776397 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.776510 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.776747 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.776777 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:48.776806 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.776849 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:48.776968 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:48.777117 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:50:48.777192 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:48.777332 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:50:48.777399 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:48.777443 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:50:48.777523 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:50:48.777580 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:50:48.878196 1928285 ssh_runner.go:195] Run: systemctl --version
	I0120 12:50:48.884400 1928285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 12:50:49.050947 1928285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 12:50:49.057154 1928285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 12:50:49.057225 1928285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 12:50:49.074895 1928285 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 12:50:49.074936 1928285 start.go:495] detecting cgroup driver to use...
	I0120 12:50:49.075018 1928285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 12:50:49.092636 1928285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 12:50:49.107580 1928285 docker.go:217] disabling cri-docker service (if available) ...
	I0120 12:50:49.107677 1928285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 12:50:49.123147 1928285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 12:50:49.138143 1928285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 12:50:49.257888 1928285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 12:50:49.426641 1928285 docker.go:233] disabling docker service ...
	I0120 12:50:49.426716 1928285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 12:50:49.442203 1928285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 12:50:49.455681 1928285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 12:50:49.581489 1928285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 12:50:49.693271 1928285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 12:50:49.707620 1928285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 12:50:49.726563 1928285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 12:50:49.726657 1928285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:50:49.737487 1928285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 12:50:49.737568 1928285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:50:49.748490 1928285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:50:49.759512 1928285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:50:49.770443 1928285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 12:50:49.781592 1928285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:50:49.792281 1928285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:50:49.810349 1928285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 12:50:49.821561 1928285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 12:50:49.831338 1928285 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 12:50:49.831408 1928285 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 12:50:49.845201 1928285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 12:50:49.855306 1928285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:50:49.974154 1928285 ssh_runner.go:195] Run: sudo systemctl restart crio
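	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following runtime settings (a sketch assuming the stock ISO defaults; not output from this run):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # expected, approximately:
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",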
	I0120 12:50:50.069614 1928285 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 12:50:50.069704 1928285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 12:50:50.074678 1928285 start.go:563] Will wait 60s for crictl version
	I0120 12:50:50.074765 1928285 ssh_runner.go:195] Run: which crictl
	I0120 12:50:50.078793 1928285 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 12:50:50.121034 1928285 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 12:50:50.121134 1928285 ssh_runner.go:195] Run: crio --version
	I0120 12:50:50.149899 1928285 ssh_runner.go:195] Run: crio --version
	I0120 12:50:50.180528 1928285 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 12:50:50.181770 1928285 main.go:141] libmachine: (addons-917221) Calling .GetIP
	I0120 12:50:50.184447 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:50.184755 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:50:50.184797 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:50:50.184964 1928285 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 12:50:50.189226 1928285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:50:50.202792 1928285 kubeadm.go:883] updating cluster {Name:addons-917221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-917221 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 12:50:50.202955 1928285 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 12:50:50.203012 1928285 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:50:50.238325 1928285 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 12:50:50.238414 1928285 ssh_runner.go:195] Run: which lz4
	I0120 12:50:50.242629 1928285 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 12:50:50.247141 1928285 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 12:50:50.247179 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 12:50:51.693656 1928285 crio.go:462] duration metric: took 1.451071713s to copy over tarball
	I0120 12:50:51.693776 1928285 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 12:50:53.982455 1928285 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.288636912s)
	I0120 12:50:53.982501 1928285 crio.go:469] duration metric: took 2.288799299s to extract the tarball
	I0120 12:50:53.982515 1928285 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 12:50:54.025341 1928285 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 12:50:54.071740 1928285 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 12:50:54.071768 1928285 cache_images.go:84] Images are preloaded, skipping loading
	I0120 12:50:54.071778 1928285 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.32.0 crio true true} ...
	I0120 12:50:54.071902 1928285 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-917221 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:addons-917221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 12:50:54.071987 1928285 ssh_runner.go:195] Run: crio config
	I0120 12:50:54.125736 1928285 cni.go:84] Creating CNI manager for ""
	I0120 12:50:54.125768 1928285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:50:54.125783 1928285 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 12:50:54.125806 1928285 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-917221 NodeName:addons-917221 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 12:50:54.125975 1928285 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-917221"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.225"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
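	Outside of this run, a generated config like the one above can be sanity-checked without mutating the node, e.g. (illustrative only; the file is written to /var/tmp/minikube/kubeadm.yaml further below):
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run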
	
	I0120 12:50:54.126057 1928285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 12:50:54.136285 1928285 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 12:50:54.136361 1928285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 12:50:54.146287 1928285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0120 12:50:54.164845 1928285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 12:50:54.183867 1928285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0120 12:50:54.202939 1928285 ssh_runner.go:195] Run: grep 192.168.39.225	control-plane.minikube.internal$ /etc/hosts
	I0120 12:50:54.207002 1928285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 12:50:54.219928 1928285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:50:54.358564 1928285 ssh_runner.go:195] Run: sudo systemctl start kubelet
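	An illustrative way (not part of this run) to confirm the kubelet unit and 10-kubeadm.conf drop-in written just above:
	  systemctl cat kubelet | head -n 20
	  systemctl is-active kubelet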
	I0120 12:50:54.376908 1928285 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221 for IP: 192.168.39.225
	I0120 12:50:54.376938 1928285 certs.go:194] generating shared ca certs ...
	I0120 12:50:54.376961 1928285 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:54.377132 1928285 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 12:50:54.544833 1928285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt ...
	I0120 12:50:54.544879 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt: {Name:mk937b7dddaa314d51a0910bcc8092832a79f20b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:54.545094 1928285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key ...
	I0120 12:50:54.545115 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key: {Name:mk23f022a00731ed79fb5dc6612b8537a74ebaa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:54.545231 1928285 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 12:50:54.837026 1928285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt ...
	I0120 12:50:54.837070 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt: {Name:mk5f2c5dbd91a0608c0e3970de07cfe369373a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:54.837280 1928285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key ...
	I0120 12:50:54.837307 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key: {Name:mkdfa08a66756eb785eb6a431773d106687f130d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:54.837416 1928285 certs.go:256] generating profile certs ...
	I0120 12:50:54.837497 1928285 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.key
	I0120 12:50:54.837517 1928285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt with IP's: []
	I0120 12:50:54.900934 1928285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt ...
	I0120 12:50:54.900973 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: {Name:mk781b8bced3a75d5e15756577a44267ea67cbc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:54.901168 1928285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.key ...
	I0120 12:50:54.901180 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.key: {Name:mkda5f8f6aca9662c8ae109a3f60795c38cec437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:54.901248 1928285 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.key.9c220ce5
	I0120 12:50:54.901268 1928285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.crt.9c220ce5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225]
	I0120 12:50:54.994397 1928285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.crt.9c220ce5 ...
	I0120 12:50:54.994435 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.crt.9c220ce5: {Name:mk81a639f5a52e5e5ef64bc39eba65f5631832c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:54.994628 1928285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.key.9c220ce5 ...
	I0120 12:50:54.994643 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.key.9c220ce5: {Name:mk0ae4e00b234cd7c79b6a5383f7fc963d5c730b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:54.994722 1928285 certs.go:381] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.crt.9c220ce5 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.crt
	I0120 12:50:54.994805 1928285 certs.go:385] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.key.9c220ce5 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.key
	I0120 12:50:54.994855 1928285 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/proxy-client.key
	I0120 12:50:54.994872 1928285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/proxy-client.crt with IP's: []
	I0120 12:50:55.109367 1928285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/proxy-client.crt ...
	I0120 12:50:55.109403 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/proxy-client.crt: {Name:mk0c1b070bd9cf662cb8ed8cfaf4817fb7f70ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:55.109567 1928285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/proxy-client.key ...
	I0120 12:50:55.109583 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/proxy-client.key: {Name:mkd9e9063b1a896a1925f45615f3f89bc3ed0e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:50:55.109751 1928285 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 12:50:55.109788 1928285 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 12:50:55.109812 1928285 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 12:50:55.109836 1928285 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 12:50:55.110470 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 12:50:55.154296 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 12:50:55.195098 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 12:50:55.220937 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 12:50:55.246224 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 12:50:55.273261 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 12:50:55.298935 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 12:50:55.324556 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 12:50:55.349923 1928285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 12:50:55.376256 1928285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 12:50:55.394711 1928285 ssh_runner.go:195] Run: openssl version
	I0120 12:50:55.400933 1928285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 12:50:55.413514 1928285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:50:55.418808 1928285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:50:55.418876 1928285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 12:50:55.424897 1928285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
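	The b5213941.0 link name used above follows the OpenSSL subject-hash convention; the same link could be recreated by hand (illustrative only):
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem \
	    "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"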
	I0120 12:50:55.436481 1928285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 12:50:55.440842 1928285 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 12:50:55.440899 1928285 kubeadm.go:392] StartCluster: {Name:addons-917221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-917221 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:50:55.440981 1928285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 12:50:55.441028 1928285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 12:50:55.481403 1928285 cri.go:89] found id: ""
	I0120 12:50:55.481499 1928285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 12:50:55.492244 1928285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 12:50:55.502774 1928285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 12:50:55.513350 1928285 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 12:50:55.513377 1928285 kubeadm.go:157] found existing configuration files:
	
	I0120 12:50:55.513438 1928285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 12:50:55.524029 1928285 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 12:50:55.524168 1928285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 12:50:55.534882 1928285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 12:50:55.545506 1928285 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 12:50:55.545609 1928285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 12:50:55.558180 1928285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 12:50:55.568116 1928285 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 12:50:55.568198 1928285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 12:50:55.578275 1928285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 12:50:55.587893 1928285 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 12:50:55.587957 1928285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 12:50:55.598187 1928285 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 12:50:55.660308 1928285 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 12:50:55.660375 1928285 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 12:50:55.770770 1928285 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 12:50:55.770894 1928285 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 12:50:55.771058 1928285 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 12:50:55.781237 1928285 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 12:50:55.887415 1928285 out.go:235]   - Generating certificates and keys ...
	I0120 12:50:55.887590 1928285 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 12:50:55.887711 1928285 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 12:50:55.992510 1928285 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 12:50:56.104242 1928285 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 12:50:56.235827 1928285 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 12:50:56.387906 1928285 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 12:50:56.529447 1928285 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 12:50:56.529667 1928285 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-917221 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0120 12:50:56.632926 1928285 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 12:50:56.633113 1928285 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-917221 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0120 12:50:56.882381 1928285 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 12:50:57.042656 1928285 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 12:50:57.181643 1928285 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 12:50:57.181759 1928285 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 12:50:57.577655 1928285 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 12:50:57.791352 1928285 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 12:50:58.022329 1928285 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 12:50:58.301353 1928285 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 12:50:58.410076 1928285 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 12:50:58.410741 1928285 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 12:50:58.413058 1928285 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 12:50:58.415281 1928285 out.go:235]   - Booting up control plane ...
	I0120 12:50:58.415389 1928285 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 12:50:58.415471 1928285 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 12:50:58.415577 1928285 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 12:50:58.431875 1928285 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 12:50:58.440151 1928285 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 12:50:58.440229 1928285 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 12:50:58.584962 1928285 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 12:50:58.585164 1928285 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 12:50:59.085585 1928285 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.073921ms
	I0120 12:50:59.085713 1928285 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 12:51:04.087867 1928285 kubeadm.go:310] [api-check] The API server is healthy after 5.003941574s
	I0120 12:51:04.100864 1928285 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 12:51:04.123587 1928285 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 12:51:04.164979 1928285 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 12:51:04.165227 1928285 kubeadm.go:310] [mark-control-plane] Marking the node addons-917221 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 12:51:04.182155 1928285 kubeadm.go:310] [bootstrap-token] Using token: 26zwk5.pyqdfjn767eykgxe
	I0120 12:51:04.183841 1928285 out.go:235]   - Configuring RBAC rules ...
	I0120 12:51:04.183997 1928285 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 12:51:04.197666 1928285 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 12:51:04.212585 1928285 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 12:51:04.222416 1928285 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 12:51:04.234280 1928285 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 12:51:04.246166 1928285 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 12:51:04.498105 1928285 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 12:51:04.918957 1928285 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 12:51:05.493385 1928285 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 12:51:05.494151 1928285 kubeadm.go:310] 
	I0120 12:51:05.494257 1928285 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 12:51:05.494268 1928285 kubeadm.go:310] 
	I0120 12:51:05.494406 1928285 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 12:51:05.494441 1928285 kubeadm.go:310] 
	I0120 12:51:05.494479 1928285 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 12:51:05.494536 1928285 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 12:51:05.494581 1928285 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 12:51:05.494587 1928285 kubeadm.go:310] 
	I0120 12:51:05.494651 1928285 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 12:51:05.494659 1928285 kubeadm.go:310] 
	I0120 12:51:05.494698 1928285 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 12:51:05.494705 1928285 kubeadm.go:310] 
	I0120 12:51:05.494754 1928285 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 12:51:05.494838 1928285 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 12:51:05.494923 1928285 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 12:51:05.494931 1928285 kubeadm.go:310] 
	I0120 12:51:05.495004 1928285 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 12:51:05.495103 1928285 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 12:51:05.495120 1928285 kubeadm.go:310] 
	I0120 12:51:05.495238 1928285 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 26zwk5.pyqdfjn767eykgxe \
	I0120 12:51:05.495403 1928285 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 12:51:05.495442 1928285 kubeadm.go:310] 	--control-plane 
	I0120 12:51:05.495451 1928285 kubeadm.go:310] 
	I0120 12:51:05.495526 1928285 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 12:51:05.495534 1928285 kubeadm.go:310] 
	I0120 12:51:05.495637 1928285 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 26zwk5.pyqdfjn767eykgxe \
	I0120 12:51:05.495781 1928285 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 12:51:05.496592 1928285 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 12:51:05.496624 1928285 cni.go:84] Creating CNI manager for ""
	I0120 12:51:05.496635 1928285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:51:05.499628 1928285 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 12:51:05.500987 1928285 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 12:51:05.513902 1928285 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 12:51:05.543242 1928285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 12:51:05.543381 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:51:05.543445 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-917221 minikube.k8s.io/updated_at=2025_01_20T12_51_05_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=addons-917221 minikube.k8s.io/primary=true
	I0120 12:51:05.705218 1928285 ops.go:34] apiserver oom_adj: -16
	I0120 12:51:05.705296 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:51:06.205684 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:51:06.706290 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:51:07.205704 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:51:07.706302 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:51:08.206202 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:51:08.705325 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:51:09.205441 1928285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 12:51:09.311089 1928285 kubeadm.go:1113] duration metric: took 3.767787402s to wait for elevateKubeSystemPrivileges
	I0120 12:51:09.311148 1928285 kubeadm.go:394] duration metric: took 13.870254491s to StartCluster
	I0120 12:51:09.311173 1928285 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:51:09.311329 1928285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 12:51:09.312011 1928285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 12:51:09.312265 1928285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 12:51:09.312408 1928285 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 12:51:09.312563 1928285 config.go:182] Loaded profile config "addons-917221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:51:09.312495 1928285 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0120 12:51:09.312624 1928285 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-917221"
	I0120 12:51:09.312633 1928285 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-917221"
	I0120 12:51:09.312640 1928285 addons.go:69] Setting registry=true in profile "addons-917221"
	I0120 12:51:09.312654 1928285 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-917221"
	I0120 12:51:09.312689 1928285 addons.go:69] Setting cloud-spanner=true in profile "addons-917221"
	I0120 12:51:09.312678 1928285 addons.go:69] Setting ingress=true in profile "addons-917221"
	I0120 12:51:09.312708 1928285 addons.go:238] Setting addon cloud-spanner=true in "addons-917221"
	I0120 12:51:09.312709 1928285 addons.go:69] Setting gcp-auth=true in profile "addons-917221"
	I0120 12:51:09.312721 1928285 addons.go:238] Setting addon ingress=true in "addons-917221"
	I0120 12:51:09.312725 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.312730 1928285 mustload.go:65] Loading cluster: addons-917221
	I0120 12:51:09.312636 1928285 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-917221"
	I0120 12:51:09.312768 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.312650 1928285 addons.go:69] Setting metrics-server=true in profile "addons-917221"
	I0120 12:51:09.312841 1928285 addons.go:238] Setting addon metrics-server=true in "addons-917221"
	I0120 12:51:09.312883 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.312893 1928285 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-917221"
	I0120 12:51:09.312928 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.312952 1928285 config.go:182] Loaded profile config "addons-917221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 12:51:09.313246 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.313274 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.313281 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.313306 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.312662 1928285 addons.go:238] Setting addon registry=true in "addons-917221"
	I0120 12:51:09.313362 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.313368 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.313396 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.313437 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.312667 1928285 addons.go:69] Setting storage-provisioner=true in profile "addons-917221"
	I0120 12:51:09.313470 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.313246 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.312667 1928285 addons.go:69] Setting ingress-dns=true in profile "addons-917221"
	I0120 12:51:09.313576 1928285 addons.go:238] Setting addon ingress-dns=true in "addons-917221"
	I0120 12:51:09.313589 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.313624 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.313709 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.313743 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.313469 1928285 addons.go:238] Setting addon storage-provisioner=true in "addons-917221"
	I0120 12:51:09.314058 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.314094 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.314106 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.314207 1928285 out.go:177] * Verifying Kubernetes components...
	I0120 12:51:09.312676 1928285 addons.go:69] Setting volcano=true in profile "addons-917221"
	I0120 12:51:09.314543 1928285 addons.go:238] Setting addon volcano=true in "addons-917221"
	I0120 12:51:09.314575 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.314974 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.314996 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.312682 1928285 addons.go:69] Setting volumesnapshots=true in profile "addons-917221"
	I0120 12:51:09.315183 1928285 addons.go:238] Setting addon volumesnapshots=true in "addons-917221"
	I0120 12:51:09.315224 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.312683 1928285 addons.go:69] Setting inspektor-gadget=true in profile "addons-917221"
	I0120 12:51:09.315441 1928285 addons.go:238] Setting addon inspektor-gadget=true in "addons-917221"
	I0120 12:51:09.315476 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.315611 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.315645 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.312624 1928285 addons.go:69] Setting yakd=true in profile "addons-917221"
	I0120 12:51:09.315866 1928285 addons.go:238] Setting addon yakd=true in "addons-917221"
	I0120 12:51:09.315898 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.312687 1928285 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-917221"
	I0120 12:51:09.315987 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.315991 1928285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 12:51:09.312739 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.316278 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.316312 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.316387 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.316414 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.316482 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.316511 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.312672 1928285 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-917221"
	I0120 12:51:09.318432 1928285 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-917221"
	I0120 12:51:09.318870 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.318928 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.335091 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0120 12:51:09.312692 1928285 addons.go:69] Setting default-storageclass=true in profile "addons-917221"
	I0120 12:51:09.338740 1928285 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-917221"
	I0120 12:51:09.338744 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I0120 12:51:09.338752 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0120 12:51:09.338820 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0120 12:51:09.339159 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.339195 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.339208 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.339245 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.339388 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0120 12:51:09.339591 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0120 12:51:09.339765 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.339810 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.339836 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.339909 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.339982 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.340083 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.340778 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.340801 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.340876 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.340876 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.340896 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.340783 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.340954 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.341047 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.341065 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.341123 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.341570 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.341601 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.341821 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.341934 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.341943 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.341982 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.342016 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.342148 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.342508 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.342522 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.343143 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.343246 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.343691 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.343692 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.344304 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.344351 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.344397 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.344762 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.344787 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.344896 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.344911 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.345225 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.345832 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.345878 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.363907 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0120 12:51:09.363988 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I0120 12:51:09.364330 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43259
	I0120 12:51:09.364425 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.364586 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.364788 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.364984 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.364996 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.365132 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.365145 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.365295 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.365507 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.365546 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.365677 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.365698 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.366168 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.366205 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.366219 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.366471 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.367826 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.368809 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.370324 1928285 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0120 12:51:09.371087 1928285 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0120 12:51:09.372190 1928285 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 12:51:09.372211 1928285 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 12:51:09.372237 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.372430 1928285 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 12:51:09.372440 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0120 12:51:09.372456 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.372903 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0120 12:51:09.373337 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.373867 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.373891 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.374226 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.374959 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.375006 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.377920 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.378667 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.379331 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.379374 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.379756 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.379789 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.379980 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.380174 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.380223 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.380448 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.380486 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.380667 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.381006 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.381070 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34063
	I0120 12:51:09.381353 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.381640 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.384039 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.384060 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.384124 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33975
	I0120 12:51:09.384615 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.385214 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.385241 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.385786 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.386353 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.386370 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.386898 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.387570 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.387609 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.395527 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
	I0120 12:51:09.395703 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38375
	I0120 12:51:09.396057 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.396157 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.396547 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.396563 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.396665 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0120 12:51:09.396709 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.396725 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.397023 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.397132 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.397323 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.397372 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.397581 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.399222 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.399241 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.399474 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.399653 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.401001 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.401067 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0120 12:51:09.400640 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0120 12:51:09.400820 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.401333 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.401775 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.402286 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.402386 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.402406 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.402773 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.402959 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.402972 1928285 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0120 12:51:09.402979 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.402988 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.403078 1928285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 12:51:09.403211 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39963
	I0120 12:51:09.403366 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.403641 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.403728 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.404182 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.404205 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.404528 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.405114 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.405157 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.405463 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.406375 1928285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0120 12:51:09.407378 1928285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0120 12:51:09.407816 1928285 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-917221"
	I0120 12:51:09.408035 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.407923 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
	I0120 12:51:09.407952 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0120 12:51:09.408420 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.408915 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.408950 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0120 12:51:09.408969 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.408473 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.409297 1928285 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0120 12:51:09.409724 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.409598 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.409770 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.409613 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I0120 12:51:09.409749 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.410083 1928285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0120 12:51:09.410141 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.410581 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.410316 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.410820 1928285 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0120 12:51:09.410840 1928285 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0120 12:51:09.410861 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.411325 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.411366 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.411510 1928285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 12:51:09.411739 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.411757 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.411823 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0120 12:51:09.411893 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.412476 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.412663 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.412742 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.412926 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.412942 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.413185 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.413318 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.413501 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.413717 1928285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0120 12:51:09.414062 1928285 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 12:51:09.414087 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0120 12:51:09.414106 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.414674 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.414696 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.415082 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.415946 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.416047 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.416185 1928285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0120 12:51:09.417254 1928285 addons.go:238] Setting addon default-storageclass=true in "addons-917221"
	I0120 12:51:09.417295 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:09.417415 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.417807 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.417849 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.418010 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.418056 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.418305 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.418424 1928285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0120 12:51:09.418455 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.418549 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.418855 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.418918 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.418943 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.419115 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.419417 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.419496 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.419654 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.419784 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.419880 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.420915 1928285 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0120 12:51:09.420961 1928285 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0120 12:51:09.422077 1928285 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0120 12:51:09.422117 1928285 out.go:177]   - Using image docker.io/registry:2.8.3
	I0120 12:51:09.423100 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I0120 12:51:09.423580 1928285 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0120 12:51:09.423605 1928285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0120 12:51:09.423627 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.423635 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.423821 1928285 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0120 12:51:09.423838 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0120 12:51:09.423854 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.424245 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.424262 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.424705 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.424941 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.426802 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.427100 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:09.427114 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:09.429460 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.429677 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:09.429709 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:09.429716 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:09.429723 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:09.429730 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:09.430260 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.430282 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.430303 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:09.430325 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:09.430331 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	W0120 12:51:09.430430 1928285 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0120 12:51:09.430572 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.430637 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.430895 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.430960 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.430974 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.431041 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.431260 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.431308 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.431485 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.431781 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.431958 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.432225 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I0120 12:51:09.432330 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0120 12:51:09.432871 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.433401 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.433421 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.433930 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.434012 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I0120 12:51:09.434251 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.434378 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.434781 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.434856 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.434871 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.435185 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.435409 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.436396 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.436910 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.436928 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.437341 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.437740 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.437770 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.438289 1928285 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0120 12:51:09.439649 1928285 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0120 12:51:09.439754 1928285 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 12:51:09.439772 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0120 12:51:09.439792 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.440275 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.440945 1928285 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0120 12:51:09.440965 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0120 12:51:09.440984 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.441735 1928285 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 12:51:09.443032 1928285 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:51:09.443055 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 12:51:09.443074 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.443865 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38957
	I0120 12:51:09.444420 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.444769 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.445054 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.445077 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.445439 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.445810 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.445835 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.446020 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.446173 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.446372 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.447286 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.447335 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.447586 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.447610 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.447637 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.447659 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.448083 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.448199 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.448292 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.448380 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.449313 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.449670 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.449683 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.449819 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.449947 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.450122 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.450205 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.453566 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I0120 12:51:09.454038 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.454403 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.454414 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.454761 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.454987 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.456754 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.458806 1928285 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0120 12:51:09.460372 1928285 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 12:51:09.460393 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0120 12:51:09.460415 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.461040 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I0120 12:51:09.461527 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.462110 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.462130 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.462220 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36027
	I0120 12:51:09.462510 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.462762 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.463256 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:09.463488 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:09.463537 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.463564 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.464048 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.464085 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.464288 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.464530 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.464551 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.464865 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.465053 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.465195 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.465306 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.466364 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.468325 1928285 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0120 12:51:09.468852 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0120 12:51:09.469296 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.469695 1928285 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0120 12:51:09.469711 1928285 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0120 12:51:09.469731 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.469962 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.469994 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.470713 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.470939 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.472613 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.473349 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.473796 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.473816 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.474087 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.474302 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.474391 1928285 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
	I0120 12:51:09.474521 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.474829 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.475132 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I0120 12:51:09.475489 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:09.476001 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.476023 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.476065 1928285 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0120 12:51:09.476078 1928285 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0120 12:51:09.476104 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.476695 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.476914 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.478668 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.478857 1928285 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 12:51:09.478870 1928285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 12:51:09.478884 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.479701 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.480212 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.480242 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.480396 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.480616 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.480810 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.481003 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.482317 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.482812 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.482837 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.483007 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.483184 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.483289 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.483397 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.483673 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
	I0120 12:51:09.484093 1928285 main.go:141] libmachine: () Calling .GetVersion
	W0120 12:51:09.484233 1928285 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53180->192.168.39.225:22: read: connection reset by peer
	I0120 12:51:09.484266 1928285 retry.go:31] will retry after 327.685187ms: ssh: handshake failed: read tcp 192.168.39.1:53180->192.168.39.225:22: read: connection reset by peer
	I0120 12:51:09.484522 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:09.484537 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:09.484869 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:09.485009 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:09.486599 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:09.488553 1928285 out.go:177]   - Using image docker.io/busybox:stable
	I0120 12:51:09.489996 1928285 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0120 12:51:09.491911 1928285 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 12:51:09.491933 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0120 12:51:09.491956 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:09.494872 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.495211 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:09.495239 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:09.495361 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:09.495593 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:09.495739 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:09.495919 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:09.874093 1928285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 12:51:09.874139 1928285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 12:51:09.891516 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0120 12:51:09.969731 1928285 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0120 12:51:09.969759 1928285 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0120 12:51:09.978624 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0120 12:51:10.139005 1928285 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0120 12:51:10.139051 1928285 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0120 12:51:10.145093 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0120 12:51:10.150026 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0120 12:51:10.153125 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0120 12:51:10.154198 1928285 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0120 12:51:10.154216 1928285 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0120 12:51:10.155061 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 12:51:10.161021 1928285 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0120 12:51:10.161055 1928285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0120 12:51:10.166056 1928285 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0120 12:51:10.166076 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0120 12:51:10.167260 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0120 12:51:10.168730 1928285 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0120 12:51:10.168755 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0120 12:51:10.183954 1928285 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 12:51:10.183992 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0120 12:51:10.263093 1928285 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0120 12:51:10.263132 1928285 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0120 12:51:10.316242 1928285 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0120 12:51:10.316269 1928285 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0120 12:51:10.323229 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 12:51:10.414720 1928285 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0120 12:51:10.414751 1928285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0120 12:51:10.421941 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0120 12:51:10.453081 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0120 12:51:10.520146 1928285 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 12:51:10.520181 1928285 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 12:51:10.542988 1928285 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0120 12:51:10.543083 1928285 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0120 12:51:10.547934 1928285 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0120 12:51:10.547963 1928285 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0120 12:51:10.641847 1928285 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0120 12:51:10.641875 1928285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0120 12:51:10.755645 1928285 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0120 12:51:10.755675 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0120 12:51:10.858826 1928285 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0120 12:51:10.858858 1928285 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0120 12:51:10.868687 1928285 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:51:10.868720 1928285 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 12:51:11.000774 1928285 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 12:51:11.000805 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0120 12:51:11.010756 1928285 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0120 12:51:11.010788 1928285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0120 12:51:11.036579 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 12:51:11.219467 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0120 12:51:11.285299 1928285 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0120 12:51:11.285351 1928285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0120 12:51:11.345736 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 12:51:11.714975 1928285 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0120 12:51:11.715014 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0120 12:51:12.116216 1928285 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0120 12:51:12.116248 1928285 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0120 12:51:12.361504 1928285 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0120 12:51:12.361548 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0120 12:51:12.675956 1928285 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.80177339s)
	I0120 12:51:12.676000 1928285 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0120 12:51:12.676072 1928285 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.801930374s)
	I0120 12:51:12.676156 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.784604699s)
	I0120 12:51:12.676216 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:12.676248 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:12.676701 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:12.676760 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:12.676799 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:12.676819 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:12.676831 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:12.677089 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:12.677103 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:12.677116 1928285 node_ready.go:35] waiting up to 6m0s for node "addons-917221" to be "Ready" ...
	I0120 12:51:12.681392 1928285 node_ready.go:49] node "addons-917221" has status "Ready":"True"
	I0120 12:51:12.681413 1928285 node_ready.go:38] duration metric: took 4.204244ms for node "addons-917221" to be "Ready" ...
	I0120 12:51:12.681424 1928285 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:51:12.688885 1928285 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0120 12:51:12.688908 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0120 12:51:12.692527 1928285 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:13.156329 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.1776434s)
	I0120 12:51:13.156395 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:13.156414 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:13.156738 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:13.156795 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:13.156813 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:13.156830 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:13.157149 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:13.157169 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:13.185640 1928285 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-917221" context rescaled to 1 replicas
	I0120 12:51:13.201485 1928285 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 12:51:13.201522 1928285 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0120 12:51:13.620542 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0120 12:51:14.725042 1928285 pod_ready.go:103] pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:16.254854 1928285 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0120 12:51:16.254910 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:16.258836 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:16.259343 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:16.259381 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:16.259648 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:16.259838 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:16.259944 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:16.260146 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:16.471985 1928285 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0120 12:51:16.702831 1928285 addons.go:238] Setting addon gcp-auth=true in "addons-917221"
	I0120 12:51:16.702902 1928285 host.go:66] Checking if "addons-917221" exists ...
	I0120 12:51:16.703352 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:16.703410 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:16.720765 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43141
	I0120 12:51:16.721248 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:16.721956 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:16.721986 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:16.722446 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:16.722937 1928285 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 12:51:16.722978 1928285 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 12:51:16.741078 1928285 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I0120 12:51:16.741628 1928285 main.go:141] libmachine: () Calling .GetVersion
	I0120 12:51:16.742258 1928285 main.go:141] libmachine: Using API Version  1
	I0120 12:51:16.742287 1928285 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 12:51:16.742634 1928285 main.go:141] libmachine: () Calling .GetMachineName
	I0120 12:51:16.742906 1928285 main.go:141] libmachine: (addons-917221) Calling .GetState
	I0120 12:51:16.744768 1928285 main.go:141] libmachine: (addons-917221) Calling .DriverName
	I0120 12:51:16.745043 1928285 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0120 12:51:16.745082 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHHostname
	I0120 12:51:16.748172 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:16.748658 1928285 main.go:141] libmachine: (addons-917221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:bf:78", ip: ""} in network mk-addons-917221: {Iface:virbr1 ExpiryTime:2025-01-20 13:50:38 +0000 UTC Type:0 Mac:52:54:00:68:bf:78 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-917221 Clientid:01:52:54:00:68:bf:78}
	I0120 12:51:16.748687 1928285 main.go:141] libmachine: (addons-917221) DBG | domain addons-917221 has defined IP address 192.168.39.225 and MAC address 52:54:00:68:bf:78 in network mk-addons-917221
	I0120 12:51:16.748889 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHPort
	I0120 12:51:16.749124 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHKeyPath
	I0120 12:51:16.749300 1928285 main.go:141] libmachine: (addons-917221) Calling .GetSSHUsername
	I0120 12:51:16.749472 1928285 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/addons-917221/id_rsa Username:docker}
	I0120 12:51:17.199420 1928285 pod_ready.go:103] pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:19.005014 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.859869786s)
	I0120 12:51:19.005096 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.005101 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.855006345s)
	I0120 12:51:19.005116 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.005181 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.852005799s)
	I0120 12:51:19.005151 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.005219 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.005232 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.005241 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.005212 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.850133775s)
	I0120 12:51:19.005293 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.005302 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.005361 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.838070881s)
	I0120 12:51:19.005381 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.005447 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.005461 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.005471 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.005478 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.005873 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.005895 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.005904 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.005915 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.005923 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.005925 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.005930 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.005934 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.005937 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.005943 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.006225 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.006242 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.006251 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.006279 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.006299 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.006304 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.006313 1928285 addons.go:479] Verifying addon ingress=true in "addons-917221"
	I0120 12:51:19.006525 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.006787 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.007029 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.683774592s)
	I0120 12:51:19.007054 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.007073 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.007470 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.585494481s)
	I0120 12:51:19.007493 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.007502 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.007753 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.007786 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.007793 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.007800 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.007810 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.007855 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.554737344s)
	I0120 12:51:19.007865 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.007877 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.007883 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.007887 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.007889 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.007896 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.007902 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.007948 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.007965 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.007970 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.007977 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.007983 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.008020 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.008035 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.008041 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.008332 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.008455 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.008460 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.008464 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.008479 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.008486 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.008495 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.008516 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.008574 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.008583 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.008593 1928285 addons.go:479] Verifying addon registry=true in "addons-917221"
	I0120 12:51:19.008832 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.009542 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.009584 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.009591 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.009675 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.009698 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.009704 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.010977 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.974358824s)
	I0120 12:51:19.011018 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.011030 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.011113 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.011138 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.791628963s)
	I0120 12:51:19.011161 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.011171 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.011337 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.665558536s)
	W0120 12:51:19.011366 1928285 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0120 12:51:19.011408 1928285 retry.go:31] will retry after 348.649143ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0120 12:51:19.011492 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.011558 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.011566 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.011576 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.011583 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.011785 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.011797 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.011974 1928285 out.go:177] * Verifying registry addon...
	I0120 12:51:19.012039 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.012060 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.012067 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.012074 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.012081 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.012081 1928285 out.go:177] * Verifying ingress addon...
	I0120 12:51:19.012288 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.012343 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.012360 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.012498 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.012528 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.013266 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.012692 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.012776 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.013338 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.013350 1928285 addons.go:479] Verifying addon metrics-server=true in "addons-917221"
	I0120 12:51:19.014698 1928285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0120 12:51:19.014711 1928285 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-917221 service yakd-dashboard -n yakd-dashboard
	
	I0120 12:51:19.014982 1928285 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0120 12:51:19.025383 1928285 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0120 12:51:19.025415 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:19.025596 1928285 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0120 12:51:19.025624 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:19.058936 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.058966 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.058998 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:19.059020 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:19.059301 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.059340 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:19.059358 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.059370 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:19.059397 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:19.059408 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	W0120 12:51:19.059514 1928285 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0120 12:51:19.202477 1928285 pod_ready.go:103] pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:19.361206 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0120 12:51:19.531589 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:19.531724 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:20.044396 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:20.045119 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:20.577800 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:20.586872 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:20.781993 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.161393676s)
	I0120 12:51:20.782038 1928285 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.036962545s)
	I0120 12:51:20.782051 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:20.782068 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:20.782413 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:20.782450 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:20.782459 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:20.782467 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:20.782474 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:20.782796 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:20.782828 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:20.782844 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:20.782855 1928285 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-917221"
	I0120 12:51:20.784024 1928285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0120 12:51:20.784952 1928285 out.go:177] * Verifying csi-hostpath-driver addon...
	I0120 12:51:20.786540 1928285 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0120 12:51:20.787639 1928285 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0120 12:51:20.787654 1928285 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0120 12:51:20.787760 1928285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0120 12:51:20.820636 1928285 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0120 12:51:20.820661 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:20.895561 1928285 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0120 12:51:20.895604 1928285 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0120 12:51:21.011198 1928285 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 12:51:21.011228 1928285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0120 12:51:21.032123 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:21.032318 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:21.118259 1928285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0120 12:51:21.294870 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:21.519203 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:21.520537 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:21.698592 1928285 pod_ready.go:103] pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:21.795757 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:22.019320 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:22.021708 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:22.113815 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.752540365s)
	I0120 12:51:22.113888 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:22.113907 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:22.114227 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:22.114254 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:22.114270 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:22.114280 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:22.114288 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:22.114728 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:22.114745 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:22.114757 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:22.293494 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:22.519480 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:22.520386 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:22.829433 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:23.074628 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:23.075060 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:23.126061 1928285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.007751131s)
	I0120 12:51:23.126133 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:23.126146 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:23.126543 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:23.126567 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:23.126582 1928285 main.go:141] libmachine: Making call to close driver server
	I0120 12:51:23.126592 1928285 main.go:141] libmachine: (addons-917221) Calling .Close
	I0120 12:51:23.126866 1928285 main.go:141] libmachine: Successfully made call to close driver server
	I0120 12:51:23.126886 1928285 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 12:51:23.126917 1928285 main.go:141] libmachine: (addons-917221) DBG | Closing plugin on server side
	I0120 12:51:23.127958 1928285 addons.go:479] Verifying addon gcp-auth=true in "addons-917221"
	I0120 12:51:23.129813 1928285 out.go:177] * Verifying gcp-auth addon...
	I0120 12:51:23.132567 1928285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0120 12:51:23.150084 1928285 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0120 12:51:23.150124 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:23.313363 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:23.524555 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:23.525025 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:23.637220 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:23.700039 1928285 pod_ready.go:103] pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:23.793818 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:24.020107 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:24.020867 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:24.136456 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:24.292792 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:24.519375 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:24.519736 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:24.636435 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:24.793471 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:25.020934 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:25.021256 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:25.137022 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:25.292695 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:25.519174 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:25.519446 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:25.636328 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:25.793133 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:26.022361 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:26.022649 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:26.136410 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:26.198993 1928285 pod_ready.go:103] pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:26.293185 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:26.520533 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:26.522486 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:26.636391 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:26.938161 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:27.038563 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:27.038884 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:27.136092 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:27.293129 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:27.519559 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:27.521111 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:27.637949 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:27.793730 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:28.021683 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:28.022415 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:28.137237 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:28.201902 1928285 pod_ready.go:103] pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:28.294210 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:28.523480 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:28.523646 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:28.636239 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:28.793839 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:29.020750 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:29.020960 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:29.137230 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:29.292544 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:29.519461 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:29.520051 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:29.642929 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:29.794674 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:30.020108 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:30.020400 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:30.139491 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:30.292946 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:30.519239 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:30.519507 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:30.636331 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:30.698415 1928285 pod_ready.go:93] pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace has status "Ready":"True"
	I0120 12:51:30.698440 1928285 pod_ready.go:82] duration metric: took 18.005886284s for pod "amd-gpu-device-plugin-hld8l" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.698449 1928285 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-mxmhx" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.702856 1928285 pod_ready.go:93] pod "coredns-668d6bf9bc-mxmhx" in "kube-system" namespace has status "Ready":"True"
	I0120 12:51:30.702880 1928285 pod_ready.go:82] duration metric: took 4.424032ms for pod "coredns-668d6bf9bc-mxmhx" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.702888 1928285 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rzkvk" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.704356 1928285 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-rzkvk" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-rzkvk" not found
	I0120 12:51:30.704375 1928285 pod_ready.go:82] duration metric: took 1.480841ms for pod "coredns-668d6bf9bc-rzkvk" in "kube-system" namespace to be "Ready" ...
	E0120 12:51:30.704384 1928285 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-rzkvk" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-rzkvk" not found
	I0120 12:51:30.704391 1928285 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-917221" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.708125 1928285 pod_ready.go:93] pod "etcd-addons-917221" in "kube-system" namespace has status "Ready":"True"
	I0120 12:51:30.708141 1928285 pod_ready.go:82] duration metric: took 3.744953ms for pod "etcd-addons-917221" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.708149 1928285 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-917221" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.712336 1928285 pod_ready.go:93] pod "kube-apiserver-addons-917221" in "kube-system" namespace has status "Ready":"True"
	I0120 12:51:30.712352 1928285 pod_ready.go:82] duration metric: took 4.197877ms for pod "kube-apiserver-addons-917221" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.712360 1928285 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-917221" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.792619 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:30.896614 1928285 pod_ready.go:93] pod "kube-controller-manager-addons-917221" in "kube-system" namespace has status "Ready":"True"
	I0120 12:51:30.896642 1928285 pod_ready.go:82] duration metric: took 184.275561ms for pod "kube-controller-manager-addons-917221" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:30.896652 1928285 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nmjdt" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:31.020414 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:31.020813 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:31.138427 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:31.292606 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:31.297351 1928285 pod_ready.go:93] pod "kube-proxy-nmjdt" in "kube-system" namespace has status "Ready":"True"
	I0120 12:51:31.297377 1928285 pod_ready.go:82] duration metric: took 400.71823ms for pod "kube-proxy-nmjdt" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:31.297387 1928285 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-917221" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:31.519380 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:31.520800 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:31.636712 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:31.697319 1928285 pod_ready.go:93] pod "kube-scheduler-addons-917221" in "kube-system" namespace has status "Ready":"True"
	I0120 12:51:31.697349 1928285 pod_ready.go:82] duration metric: took 399.953462ms for pod "kube-scheduler-addons-917221" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:31.697370 1928285 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pf9mc" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:31.792104 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:32.022451 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:32.023214 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:32.136894 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:32.293157 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:32.520107 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:32.520422 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:32.636753 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:32.792416 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:33.021399 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:33.022017 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:33.137373 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:33.292979 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:33.519546 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:33.520095 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:33.636308 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:33.704918 1928285 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pf9mc" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:33.793896 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:34.021646 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:34.021897 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:34.136651 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:34.292858 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:34.519752 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:34.519929 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:34.636684 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:35.259913 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:35.260221 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:35.260760 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:35.261110 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:35.358888 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:35.520049 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:35.520209 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:35.635777 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:35.793042 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:36.019843 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:36.020401 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:36.135808 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:36.203053 1928285 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pf9mc" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:36.293202 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:36.520189 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:36.520301 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:36.636180 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:36.792340 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:37.020373 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:37.020425 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:37.136058 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:37.299206 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:37.573804 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:37.573825 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:37.639308 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:37.792370 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:38.020400 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:38.021170 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:38.136533 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:38.204203 1928285 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pf9mc" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:38.292535 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:38.519347 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:38.519906 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:38.647664 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:38.795931 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:39.148876 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:39.149182 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:39.149502 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:39.293418 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:39.524375 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:39.525084 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:39.636639 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:39.793039 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:40.019580 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:40.020167 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:40.135633 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:40.204879 1928285 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pf9mc" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:40.293442 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:40.518593 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:40.519846 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:40.636615 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:40.793013 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:41.019084 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:41.019459 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:41.136260 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:41.292643 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:41.520232 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:41.520971 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:41.637032 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:41.792984 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:42.019366 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:42.019767 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:42.136834 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:42.293577 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:42.519341 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:42.519517 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:42.636249 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:42.704283 1928285 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pf9mc" in "kube-system" namespace has status "Ready":"False"
	I0120 12:51:42.830287 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:43.022956 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:43.023482 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:43.136757 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:43.292327 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:43.521351 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:43.521648 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:43.637157 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:43.792533 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:44.020153 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:44.020602 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:44.136308 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:44.203597 1928285 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pf9mc" in "kube-system" namespace has status "Ready":"True"
	I0120 12:51:44.203623 1928285 pod_ready.go:82] duration metric: took 12.506244928s for pod "nvidia-device-plugin-daemonset-pf9mc" in "kube-system" namespace to be "Ready" ...
	I0120 12:51:44.203631 1928285 pod_ready.go:39] duration metric: took 31.522195698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 12:51:44.203656 1928285 api_server.go:52] waiting for apiserver process to appear ...
	I0120 12:51:44.203709 1928285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 12:51:44.243050 1928285 api_server.go:72] duration metric: took 34.930588496s to wait for apiserver process to appear ...
	I0120 12:51:44.243139 1928285 api_server.go:88] waiting for apiserver healthz status ...
	I0120 12:51:44.243190 1928285 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0120 12:51:44.248745 1928285 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0120 12:51:44.249797 1928285 api_server.go:141] control plane version: v1.32.0
	I0120 12:51:44.249821 1928285 api_server.go:131] duration metric: took 6.659496ms to wait for apiserver health ...
	I0120 12:51:44.249829 1928285 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 12:51:44.258211 1928285 system_pods.go:59] 18 kube-system pods found
	I0120 12:51:44.258243 1928285 system_pods.go:61] "amd-gpu-device-plugin-hld8l" [462eae70-d849-4770-ba59-02ea700a1d89] Running
	I0120 12:51:44.258248 1928285 system_pods.go:61] "coredns-668d6bf9bc-mxmhx" [08731564-a661-4c67-bf10-c8b25ebab244] Running
	I0120 12:51:44.258255 1928285 system_pods.go:61] "csi-hostpath-attacher-0" [3f945bf7-84eb-4aca-a890-1829977b61c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0120 12:51:44.258262 1928285 system_pods.go:61] "csi-hostpath-resizer-0" [be6ab695-0956-4e8d-a885-c03398c00dda] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0120 12:51:44.258269 1928285 system_pods.go:61] "csi-hostpathplugin-59m7t" [cbe259a1-e553-48b4-9470-c307d5a4471a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0120 12:51:44.258275 1928285 system_pods.go:61] "etcd-addons-917221" [562cc52d-b072-4c67-8bde-e0a559ee4ecc] Running
	I0120 12:51:44.258279 1928285 system_pods.go:61] "kube-apiserver-addons-917221" [d213bb49-b562-4025-b473-2fbdfe45b597] Running
	I0120 12:51:44.258282 1928285 system_pods.go:61] "kube-controller-manager-addons-917221" [44b0f31c-58b5-4bb4-82c2-84b09334ddc8] Running
	I0120 12:51:44.258286 1928285 system_pods.go:61] "kube-ingress-dns-minikube" [0af390de-4d03-4c24-b8ff-a393de33ff2d] Running
	I0120 12:51:44.258290 1928285 system_pods.go:61] "kube-proxy-nmjdt" [23587f04-d582-4420-8f1c-aa4187a7d011] Running
	I0120 12:51:44.258294 1928285 system_pods.go:61] "kube-scheduler-addons-917221" [4e44e021-8812-439a-aec0-52ee08621fc9] Running
	I0120 12:51:44.258298 1928285 system_pods.go:61] "metrics-server-7fbb699795-m2dgw" [78861784-257c-4bc8-88cc-1751f08124fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:51:44.258305 1928285 system_pods.go:61] "nvidia-device-plugin-daemonset-pf9mc" [a5b953b2-5067-4e24-9998-c91fb25aeaf0] Running
	I0120 12:51:44.258311 1928285 system_pods.go:61] "registry-6c88467877-ndwgb" [bca66afd-2437-496e-b76a-e829fe9f5952] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0120 12:51:44.258319 1928285 system_pods.go:61] "registry-proxy-sb2tk" [7dbae7cd-f142-43d6-8e97-d98b3ad5e51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0120 12:51:44.258326 1928285 system_pods.go:61] "snapshot-controller-68b874b76f-986m7" [8ccac2dc-242e-4540-97d4-e3e8928c8d41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 12:51:44.258340 1928285 system_pods.go:61] "snapshot-controller-68b874b76f-hzbbm" [9613d10d-71d1-425d-990f-9e4650a90330] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 12:51:44.258346 1928285 system_pods.go:61] "storage-provisioner" [663fd4c4-254d-4d56-92cc-f4d6afbe402a] Running
	I0120 12:51:44.258354 1928285 system_pods.go:74] duration metric: took 8.518189ms to wait for pod list to return data ...
	I0120 12:51:44.258366 1928285 default_sa.go:34] waiting for default service account to be created ...
	I0120 12:51:44.261557 1928285 default_sa.go:45] found service account: "default"
	I0120 12:51:44.261585 1928285 default_sa.go:55] duration metric: took 3.207719ms for default service account to be created ...
	I0120 12:51:44.261598 1928285 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 12:51:44.270633 1928285 system_pods.go:87] 18 kube-system pods found
	I0120 12:51:44.275064 1928285 system_pods.go:105] "amd-gpu-device-plugin-hld8l" [462eae70-d849-4770-ba59-02ea700a1d89] Running
	I0120 12:51:44.275088 1928285 system_pods.go:105] "coredns-668d6bf9bc-mxmhx" [08731564-a661-4c67-bf10-c8b25ebab244] Running
	I0120 12:51:44.275098 1928285 system_pods.go:105] "csi-hostpath-attacher-0" [3f945bf7-84eb-4aca-a890-1829977b61c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0120 12:51:44.275104 1928285 system_pods.go:105] "csi-hostpath-resizer-0" [be6ab695-0956-4e8d-a885-c03398c00dda] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0120 12:51:44.275112 1928285 system_pods.go:105] "csi-hostpathplugin-59m7t" [cbe259a1-e553-48b4-9470-c307d5a4471a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0120 12:51:44.275117 1928285 system_pods.go:105] "etcd-addons-917221" [562cc52d-b072-4c67-8bde-e0a559ee4ecc] Running
	I0120 12:51:44.275123 1928285 system_pods.go:105] "kube-apiserver-addons-917221" [d213bb49-b562-4025-b473-2fbdfe45b597] Running
	I0120 12:51:44.275128 1928285 system_pods.go:105] "kube-controller-manager-addons-917221" [44b0f31c-58b5-4bb4-82c2-84b09334ddc8] Running
	I0120 12:51:44.275132 1928285 system_pods.go:105] "kube-ingress-dns-minikube" [0af390de-4d03-4c24-b8ff-a393de33ff2d] Running
	I0120 12:51:44.275137 1928285 system_pods.go:105] "kube-proxy-nmjdt" [23587f04-d582-4420-8f1c-aa4187a7d011] Running
	I0120 12:51:44.275141 1928285 system_pods.go:105] "kube-scheduler-addons-917221" [4e44e021-8812-439a-aec0-52ee08621fc9] Running
	I0120 12:51:44.275147 1928285 system_pods.go:105] "metrics-server-7fbb699795-m2dgw" [78861784-257c-4bc8-88cc-1751f08124fa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 12:51:44.275155 1928285 system_pods.go:105] "nvidia-device-plugin-daemonset-pf9mc" [a5b953b2-5067-4e24-9998-c91fb25aeaf0] Running
	I0120 12:51:44.275162 1928285 system_pods.go:105] "registry-6c88467877-ndwgb" [bca66afd-2437-496e-b76a-e829fe9f5952] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0120 12:51:44.275169 1928285 system_pods.go:105] "registry-proxy-sb2tk" [7dbae7cd-f142-43d6-8e97-d98b3ad5e51a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0120 12:51:44.275180 1928285 system_pods.go:105] "snapshot-controller-68b874b76f-986m7" [8ccac2dc-242e-4540-97d4-e3e8928c8d41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 12:51:44.275189 1928285 system_pods.go:105] "snapshot-controller-68b874b76f-hzbbm" [9613d10d-71d1-425d-990f-9e4650a90330] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0120 12:51:44.275194 1928285 system_pods.go:105] "storage-provisioner" [663fd4c4-254d-4d56-92cc-f4d6afbe402a] Running
	I0120 12:51:44.275205 1928285 system_pods.go:147] duration metric: took 13.598925ms to wait for k8s-apps to be running ...
	I0120 12:51:44.275215 1928285 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 12:51:44.275264 1928285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 12:51:44.293126 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:44.308136 1928285 system_svc.go:56] duration metric: took 32.908163ms WaitForService to wait for kubelet
	I0120 12:51:44.308174 1928285 kubeadm.go:582] duration metric: took 34.99572302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 12:51:44.308198 1928285 node_conditions.go:102] verifying NodePressure condition ...
	I0120 12:51:44.311919 1928285 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 12:51:44.311951 1928285 node_conditions.go:123] node cpu capacity is 2
	I0120 12:51:44.311966 1928285 node_conditions.go:105] duration metric: took 3.762511ms to run NodePressure ...
	I0120 12:51:44.311980 1928285 start.go:241] waiting for startup goroutines ...
	I0120 12:51:44.518452 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:44.519846 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:44.636889 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:44.793341 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:45.021488 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:45.021989 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:45.137038 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:45.293278 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:45.520264 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:45.520520 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:45.636613 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:45.792358 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:46.019327 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:46.019882 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:46.136618 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:46.292781 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:46.519780 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:46.520090 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:46.636027 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:46.792460 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:47.019823 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:47.020465 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:47.136702 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:47.292849 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:47.520035 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:47.520287 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:47.636824 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:47.793434 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:48.019120 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:48.019577 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:48.136807 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:48.293797 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:48.520358 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:48.520547 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:48.636861 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:48.793400 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:49.020383 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:49.021029 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:49.136679 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:49.293070 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:49.519583 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:49.519916 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:49.636838 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:49.793145 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:50.020896 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:50.021612 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:50.136728 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:50.295058 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:50.519949 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:50.520663 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:50.636957 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:50.792819 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:51.022258 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:51.022460 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:51.136602 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:51.293333 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:51.741662 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:51.742044 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:51.742288 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:51.793636 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:52.019946 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:52.020623 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:52.137046 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:52.292448 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:52.520485 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:52.520762 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:52.636435 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:52.794855 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:53.023282 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:53.023364 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:53.137902 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:53.343768 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:53.521223 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:53.521474 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:53.639419 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:53.794858 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:54.020897 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:54.021143 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:54.142961 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:54.294752 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:54.519621 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:54.519773 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:54.637895 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:54.793703 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:55.022989 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0120 12:51:55.023801 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:55.137036 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:55.293494 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:55.518491 1928285 kapi.go:107] duration metric: took 36.503790266s to wait for kubernetes.io/minikube-addons=registry ...
	I0120 12:51:55.521633 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:55.636935 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:55.793722 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:56.018442 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:56.136593 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:56.434339 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:56.519766 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:56.636208 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:56.793577 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:57.019885 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:57.136563 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:57.293216 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:57.520030 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:57.637004 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:57.792508 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:58.021585 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:58.136591 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:58.293148 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:58.519234 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:58.637315 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:58.792760 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:59.018748 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:59.350334 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:59.352432 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:51:59.524000 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:51:59.637478 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:51:59.792985 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:00.019525 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:00.141372 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:00.294241 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:00.519317 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:00.637041 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:00.794357 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:01.019558 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:01.136801 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:01.299161 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:01.519556 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:01.636278 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:01.793036 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:02.020107 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:02.137288 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:02.293153 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:02.519450 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:02.636089 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:02.793079 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:03.020148 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:03.137201 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:03.293124 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:03.682691 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:03.684528 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:03.792668 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:04.020873 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:04.136952 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:04.294396 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:04.519375 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:04.635861 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:04.792743 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:05.019138 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:05.137140 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:05.292512 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:05.520000 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:05.636830 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:05.792778 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:06.019228 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:06.136524 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:06.293812 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:06.742690 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:06.743313 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:06.793920 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:07.019705 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:07.135933 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:07.293601 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:07.519954 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:07.639946 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:07.799559 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:08.021682 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:08.140045 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:08.294102 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:08.522350 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:08.636671 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:08.801891 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:09.018848 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:09.136726 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:09.293108 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:09.519593 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:09.636850 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:09.793171 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:10.020337 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:10.135569 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:10.292801 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:10.579220 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:10.636594 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:10.792804 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:11.018833 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:11.138237 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:11.293093 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:11.522918 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:11.639011 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:11.791949 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:12.019771 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:12.137202 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:12.295812 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:12.519223 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:12.638507 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:12.793225 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:13.019646 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:13.136561 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:13.293437 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:13.524671 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:13.638089 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:13.795715 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:14.019710 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:14.136057 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:14.292142 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:14.524097 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:14.636695 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:14.792033 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:15.019586 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:15.137513 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:15.293083 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:15.524680 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:15.636553 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:15.792606 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:16.020494 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:16.136020 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:16.292359 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:16.523017 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:16.637989 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:16.794178 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:17.019908 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:17.136541 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:17.292808 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:17.520636 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:17.637025 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:17.793263 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:18.024805 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:18.136770 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:18.292905 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:18.519441 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:18.636133 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:18.792930 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:19.020218 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:19.280041 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:19.292741 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:19.519942 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:19.635911 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:19.793177 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:20.021924 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:20.137116 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:20.294328 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:20.520167 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:20.636945 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:20.793273 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:21.021828 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:21.137734 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:21.295812 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:21.519875 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:21.637036 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:21.794049 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:22.021092 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:22.136829 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:22.293375 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:22.519814 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:22.638918 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:22.793470 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:23.019607 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:23.136162 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:23.293358 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:23.519406 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:23.636710 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:23.793404 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:24.019625 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:24.136026 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:24.293521 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:24.814989 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:24.815593 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:24.815858 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:25.022888 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:25.138926 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:25.297032 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:25.518933 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:25.636463 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:25.793568 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:26.020441 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:26.137403 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:26.294291 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0120 12:52:26.521362 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:26.636013 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:26.795159 1928285 kapi.go:107] duration metric: took 1m6.007401448s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0120 12:52:27.020131 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:27.138674 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:27.519474 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:27.636323 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:28.020996 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:28.137319 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:28.520887 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:28.636102 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:29.019542 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:29.135839 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:29.519514 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:29.636984 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:30.020732 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:30.136811 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:30.519992 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:30.636637 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:31.019719 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:31.136469 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:31.520057 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:31.637004 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:32.021127 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:32.274185 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:32.519443 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:32.636233 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:33.019158 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:33.140059 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:33.519868 1928285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0120 12:52:33.636947 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:34.019535 1928285 kapi.go:107] duration metric: took 1m15.00454413s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0120 12:52:34.136821 1928285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0120 12:52:34.637209 1928285 kapi.go:107] duration metric: took 1m11.504637746s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0120 12:52:34.638801 1928285 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-917221 cluster.
	I0120 12:52:34.640261 1928285 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0120 12:52:34.641883 1928285 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
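	For illustration (a sketch only: the `gcp-auth-skip-secret` label key comes from the message above, while the "true" value, pod name, and image are assumptions), a pod opting out of the credential mount could be declared as:
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                  # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"      # opt-out label named in the message above; value assumed
	spec:
	  containers:
	  - name: app                         # hypothetical container name
	    image: nginx                      # placeholder image
	Existing pods keep their current mounts until they are recreated or the addon is re-enabled with --refresh, as the next message notes.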
	I0120 12:52:34.643403 1928285 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, inspektor-gadget, storage-provisioner, amd-gpu-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0120 12:52:34.644677 1928285 addons.go:514] duration metric: took 1m25.332183127s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns inspektor-gadget storage-provisioner amd-gpu-device-plugin metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0120 12:52:34.644732 1928285 start.go:246] waiting for cluster config update ...
	I0120 12:52:34.644754 1928285 start.go:255] writing updated cluster config ...
	I0120 12:52:34.645043 1928285 ssh_runner.go:195] Run: rm -f paused
	I0120 12:52:34.700186 1928285 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 12:52:34.702221 1928285 out.go:177] * Done! kubectl is now configured to use "addons-917221" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.081007824Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:964a023e8d988e6ec4e532a332e351e6fefe74b634cff21088ba30cdbe2d080a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=58a991e6-fb38-4bea-a35a-d6d69da62102 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.081114515Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:964a023e8d988e6ec4e532a332e351e6fefe74b634cff21088ba30cdbe2d080a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1737377487793522064,StartedAt:1737377487824609921,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af390de-4d03-4c24-b8ff-a393de33ff2d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container
.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0af390de-4d03-4c24-b8ff-a393de33ff2d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0af390de-4d03-4c24-b8ff-a393de33ff2d/containers/minikube-ingress-dns/80649f56,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/0af390de-4d03-4c24-b8ff-a393de33ff2d/volumes/kubernetes.io~projected/kube-api-access-4qml6,Readonly:true,SelinuxRelabel:
false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-ingress-dns-minikube_0af390de-4d03-4c24-b8ff-a393de33ff2d/minikube-ingress-dns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=58a991e6-fb38-4bea-a35a-d6d69da62102 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.081613185Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e88a6004c8bfa299f7ae498415e9b21b63fad5185ef07832fabcc0d803b1fbf6,Verbose:false,}" file="otel-collector/interceptors.go:62" id=03f54555-2820-4cd0-bb93-f61ceda5f047 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.081718723Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e88a6004c8bfa299f7ae498415e9b21b63fad5185ef07832fabcc0d803b1fbf6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1737377477763619835,StartedAt:1737377478271522203,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663fd4c4-254d-4d56-92cc-f4d6afbe402a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/663fd4c4-254d-4d56-92cc-f4d6afbe402a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/663fd4c4-254d-4d56-92cc-f4d6afbe402a/containers/storage-provisioner/49c797ac,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/663fd4c4-254d-4d56-92cc-f4d6afbe402a/volumes/kubernetes.io~projected/kube-api-access-8bdjf,Readonly:true,SelinuxRelabel:false,Pr
opagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_663fd4c4-254d-4d56-92cc-f4d6afbe402a/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=03f54555-2820-4cd0-bb93-f61ceda5f047 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.082229610Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:94ed682265a4814545308f1406a1740d7749cf95f2eccbd6184fc4b316ae1c02,Verbose:false,}" file="otel-collector/interceptors.go:62" id=314f4900-fb7c-4647-8477-656a5fea5f86 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.082385210Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:94ed682265a4814545308f1406a1740d7749cf95f2eccbd6184fc4b316ae1c02,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1737377475098355734,StartedAt:1737377475414230741,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mxmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08731564-a661-4c67-bf10-c8b25ebab244,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/08731564-a661-4c67-bf10-c8b25ebab244/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/08731564-a661-4c67-bf10-c8b25ebab244/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/08731564-a661-4c67-bf10-c8b25ebab244/containers/coredns/f15212c2,Readonly:false,SelinuxRelabel:false,Propagation:PRO
PAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/08731564-a661-4c67-bf10-c8b25ebab244/volumes/kubernetes.io~projected/kube-api-access-l49s8,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-668d6bf9bc-mxmhx_08731564-a661-4c67-bf10-c8b25ebab244/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:982,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=314f4900-fb7c-4647-8477-656a5fea5f86 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.082878229Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:3e5085c22f6741c6b3778af76f289ca9e89e86a4fc8c3a1764303f5174986282,Verbose:false,}" file="otel-collector/interceptors.go:62" id=4244ef21-18c4-4616-a9ea-92c3d04ba169 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.083187141Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:3e5085c22f6741c6b3778af76f289ca9e89e86a4fc8c3a1764303f5174986282,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1737377471342880742,StartedAt:1737377471554078470,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.32.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmjdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23587f04-d582-4420-8f1c-aa4187a7d011,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/23587f04-d582-4420-8f1c-aa4187a7d011/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/23587f04-d582-4420-8f1c-aa4187a7d011/containers/kube-proxy/067224b8,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/
kubelet/pods/23587f04-d582-4420-8f1c-aa4187a7d011/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/23587f04-d582-4420-8f1c-aa4187a7d011/volumes/kubernetes.io~projected/kube-api-access-lcmgq,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-nmjdt_23587f04-d582-4420-8f1c-aa4187a7d011/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-colle
ctor/interceptors.go:74" id=4244ef21-18c4-4616-a9ea-92c3d04ba169 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.083896806Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:825f48d12e28b287a40b6846f6c259ab4927af379eb238898c778a953003164c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=39106358-b995-4bdd-b93f-5b1b7cd4d9a5 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.084172794Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:825f48d12e28b287a40b6846f6c259ab4927af379eb238898c778a953003164c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1737377459708828232,StartedAt:1737377459823739412,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.16-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3266cebc164968f2059c3d25ca7f8095,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/3266cebc164968f2059c3d25ca7f8095/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/3266cebc164968f2059c3d25ca7f8095/containers/etcd/0300ac52,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-addons-9
17221_3266cebc164968f2059c3d25ca7f8095/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=39106358-b995-4bdd-b93f-5b1b7cd4d9a5 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.084727229Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e97266e6a915368f89181ff2044e91ae810e849d11382bd9c5a5198a5e96e28d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1960b5cb-5bd3-480f-997e-d85b9e88fb71 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.084822932Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e97266e6a915368f89181ff2044e91ae810e849d11382bd9c5a5198a5e96e28d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1737377459696659275,StartedAt:1737377459797522746,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.32.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a14aeea8629281737adbf7d5a47e481,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6a14aeea8629281737adbf7d5a47e481/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6a14aeea8629281737adbf7d5a47e481/containers/kube-apiserver/3b3ad942,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/
var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-917221_6a14aeea8629281737adbf7d5a47e481/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1960b5cb-5bd3-480f-997e-d85b9e88fb71 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.085551971Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ec13b6bb534423fe2914ca4398593b0760827f81334ba3280087b8d1bae57164,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7f65dbe3-1230-45d2-87aa-f0402d7cf495 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.085805872Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ec13b6bb534423fe2914ca4398593b0760827f81334ba3280087b8d1bae57164,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1737377459675499574,StartedAt:1737377459768357009,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.32.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fedbd7e1490dd5d9a8b2dd7c359a9d2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0fedbd7e1490dd5d9a8b2dd7c359a9d2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0fedbd7e1490dd5d9a8b2dd7c359a9d2/containers/kube-controller-manager/4c26db1b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMapp
ings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-addons-917221_0fedbd7e1490dd5d9a8b2dd7c359a9d2/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,Hugepag
eLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7f65dbe3-1230-45d2-87aa-f0402d7cf495 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.091720019Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0387f3cd1067a891f1a4e47913de92ba5e0c400cbeb6ff8c33877d11217ea536,Verbose:false,}" file="otel-collector/interceptors.go:62" id=96c12ade-92c0-4565-b0f1-8f31bd8a3ba0 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.091857905Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0387f3cd1067a891f1a4e47913de92ba5e0c400cbeb6ff8c33877d11217ea536,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1737377459609936568,StartedAt:1737377459703423601,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.32.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9777758b2e50da6546e06b73c535629f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9777758b2e50da6546e06b73c535629f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9777758b2e50da6546e06b73c535629f/containers/kube-scheduler/8200c0e8,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-917221_9777758b2e50da6546e06b73c535629f/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,
CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=96c12ade-92c0-4565-b0f1-8f31bd8a3ba0 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.128612852Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf698a23-0168-4ce8-bc77-14de8d384bd8 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.128685143Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf698a23-0168-4ce8-bc77-14de8d384bd8 name=/runtime.v1.RuntimeService/Version
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.130443870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=525da45e-35d9-4e8b-9cc9-2c46c227ab14 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.131874391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377745131847305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=525da45e-35d9-4e8b-9cc9-2c46c227ab14 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.132684135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44f9e0cd-555d-4c6b-92d7-7e08efd008d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.132766214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44f9e0cd-555d-4c6b-92d7-7e08efd008d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.133070004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:282156ccb6e96caf6ed91d8b7b3522f694ca1be32ba0e8bb0c09c62f17d459d6,PodSandboxId:9e2710acfb3acafc9cd8142168b8dcd5c5caa60909b605ea85b991ed2ad07a11,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737377605101180789,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 55dfbb36-858b-400d-a26c-3659038532ba,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ce8942a162c861ec51ad3cd93329a857170ae574e909007070225b9269338c,PodSandboxId:480674baba786eac8005e814d30693fd137a0a16afed71b61e0588bd78c98bda,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737377558143870445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d68c02ea-4ab7-49a6-90c8-8ad183045335,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2d63032e8b4d3cba80bfb974fdfe7249a56e48410926fb730e3264fb89fa8a,PodSandboxId:e5c529992110525504bccd4e3e66267effdfee9fcd39c6a5c83add336036c8bf,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737377552625143440,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-v4txw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6646054f-e7b1-497a-b30e-3e93503322cc,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:437d5d8afb8a38e72a90f71e79f1dab46be0a182fe807b7fe4b617f26b8c0815,PodSandboxId:23e6179478be848e7b4ad7d056691b96e5805521dde42b81936309646653be96,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1737377531390712073,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-57vh5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8640bc18-4953-4443-b1dd-c3c507681025,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8543acf59df4350133864398c6bd6740d3ee8c0b5046be06c27d43cfe2930cd1,PodSandboxId:6f77c3f3da0c288fdfd1b6f5e0bdf075f5814fa217aad1ce6a0c7f4c0c8380fa,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737377531225966873,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5dpnt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8289d847-532f-4228-b9bf-2130f3d95348,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4aae86615ff9f739d5a91bc8b7b50ffb29ee7c5f200bdea1639c1f1593b18b0,PodSandboxId:f4b14b106a470f663e553bf49f192029832e465eada5577b8db89558c03c1975,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1737377521289340667,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-66jcz,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ca9c83bf-37b3-450d-b0a5-9a58938b9521,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf93d7894dcb4515fd30e2b41a6f0fd20050c52465f73305463248518e6fd53,PodSandboxId:f1ec3a6e40cdd52a35d30b9221ea0430e7a93d23739095a92b1c911263e1bba8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737377489641471852,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hld8l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462eae70-d849-4770-ba59-02ea700a1d89,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:964a023e8d988e6ec4e532a332e351e6fefe74b634cff21088ba30cdbe2d080a,PodSandboxId:4c1fd37cb65d2e672e368075c82a35f244e4d3cc0e147ecfd5aea55c214580d3,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737377487735794420,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0af390de-4d03-4c24-b8ff-a393de33ff2d,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e88a6004c8bfa299f7ae498415e9b21b63fad5185ef07832fabcc0d803b1fbf6,PodSandboxId:b86a4e237c68ed02cf1496cd26043d978aee7298358f387f5528889cfa323088,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737377477473391504,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 663fd4c4-254d-4d56-92cc-f4d6afbe402a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94ed682265a4814545308f1406a1740d7749cf95f2eccbd6184fc4b316ae1c02,PodSandboxId:9930b189405de44940743f25365046b073d89c6c916d85fdd5e618423009fb7d,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737377474608209715,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-mxmhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08731564-a661-4c67-bf10-c8b25ebab244,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:3e5085c22f6741c6b3778af76f289ca9e89e86a4fc8c3a1764303f5174986282,PodSandboxId:48b07c322826edfd536e0e05bd0927d67e822eaae3a72fe59b7bf6321ccb9897,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737377470710420885,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nmjdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23587f04-d582-4420-8f1c-aa4187a7d011,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:825f48d12e28b287a
40b6846f6c259ab4927af379eb238898c778a953003164c,PodSandboxId:421e8ba89350bf16e42714877e9cb1c8da0de66b5670d7e259b4834fdac93749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737377459586055740,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-917221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3266cebc164968f2059c3d25ca7f8095,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e97266e6a915368f89181ff2044e91ae810e849d11382bd9c5a5198a5e96e28d
,PodSandboxId:5f293ee2f86d36ec7c6c17a8509e0e2db54592ae97d6bde706b873d74ca6f5f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737377459625794517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-917221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a14aeea8629281737adbf7d5a47e481,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec13b6bb534423fe2914ca4398593b0760827f81334ba3280087b8d1bae57164,PodSandboxId:42c
0bd647ad5eba4b6b183de12c4e420b34e3a1ba8d29455265c1be531af031b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737377459586584953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-917221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fedbd7e1490dd5d9a8b2dd7c359a9d2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0387f3cd1067a891f1a4e47913de92ba5e0c400cbeb6ff8c33877d11217ea536,PodSan
dboxId:5002b6743792f4c56114656aec9de056690043bafca5d1e23c655ed7c47c2ed0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737377459560522632,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-917221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9777758b2e50da6546e06b73c535629f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44f9e0cd-555d-4c6b-92d7-7e08efd008d4 name=/runtime.v1
.RuntimeService/ListContainers
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.152378620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fc72662-94b2-43cf-9cfc-e3547c3c04d5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 12:55:45 addons-917221 crio[663]: time="2025-01-20 12:55:45.158015829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377745157983047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fc72662-94b2-43cf-9cfc-e3547c3c04d5 name=/runtime.v1.ImageService/ImageFsInfo
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	282156ccb6e96       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   9e2710acfb3ac       nginx
	f9ce8942a162c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   480674baba786       busybox
	ca2d63032e8b4       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   e5c5299921105       ingress-nginx-controller-56d7c84fd4-v4txw
	437d5d8afb8a3       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   23e6179478be8       ingress-nginx-admission-patch-57vh5
	8543acf59df43       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   6f77c3f3da0c2       ingress-nginx-admission-create-5dpnt
	d4aae86615ff9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   f4b14b106a470       local-path-provisioner-76f89f99b5-66jcz
	daf93d7894dcb       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   f1ec3a6e40cdd       amd-gpu-device-plugin-hld8l
	964a023e8d988       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   4c1fd37cb65d2       kube-ingress-dns-minikube
	e88a6004c8bfa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   b86a4e237c68e       storage-provisioner
	94ed682265a48       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   9930b189405de       coredns-668d6bf9bc-mxmhx
	3e5085c22f674       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                                             4 minutes ago       Running             kube-proxy                0                   48b07c322826e       kube-proxy-nmjdt
	e97266e6a9153       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                                             4 minutes ago       Running             kube-apiserver            0                   5f293ee2f86d3       kube-apiserver-addons-917221
	ec13b6bb53442       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                                             4 minutes ago       Running             kube-controller-manager   0                   42c0bd647ad5e       kube-controller-manager-addons-917221
	825f48d12e28b       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   421e8ba89350b       etcd-addons-917221
	0387f3cd1067a       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                                             4 minutes ago       Running             kube-scheduler            0                   5002b6743792f       kube-scheduler-addons-917221
	
	
	==> coredns [94ed682265a4814545308f1406a1740d7749cf95f2eccbd6184fc4b316ae1c02] <==
	[INFO] 10.244.0.8:48518 - 45587 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000123623s
	[INFO] 10.244.0.8:48518 - 47584 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000086657s
	[INFO] 10.244.0.8:48518 - 13172 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000073059s
	[INFO] 10.244.0.8:48518 - 59421 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000189921s
	[INFO] 10.244.0.8:48518 - 15694 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00017727s
	[INFO] 10.244.0.8:48518 - 13314 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000166079s
	[INFO] 10.244.0.8:48518 - 3496 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000141773s
	[INFO] 10.244.0.8:34132 - 49030 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153297s
	[INFO] 10.244.0.8:34132 - 49332 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000146505s
	[INFO] 10.244.0.8:44884 - 8714 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093937s
	[INFO] 10.244.0.8:44884 - 9164 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000136653s
	[INFO] 10.244.0.8:58246 - 6047 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092432s
	[INFO] 10.244.0.8:58246 - 5847 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00027801s
	[INFO] 10.244.0.8:43932 - 24714 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000109377s
	[INFO] 10.244.0.8:43932 - 24492 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000268021s
	[INFO] 10.244.0.23:55993 - 920 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000261687s
	[INFO] 10.244.0.23:40710 - 38197 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001049885s
	[INFO] 10.244.0.23:46972 - 35317 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191603s
	[INFO] 10.244.0.23:39076 - 7507 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000082448s
	[INFO] 10.244.0.23:51107 - 29626 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000087943s
	[INFO] 10.244.0.23:45695 - 35514 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000232984s
	[INFO] 10.244.0.23:39154 - 48959 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003918038s
	[INFO] 10.244.0.23:53116 - 59342 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00370877s
	[INFO] 10.244.0.27:55158 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000414s
	[INFO] 10.244.0.27:58294 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000183225s
	
	
	==> describe nodes <==
	Name:               addons-917221
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-917221
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3
	                    minikube.k8s.io/name=addons-917221
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T12_51_05_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-917221
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 12:51:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-917221
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 12:55:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 12:53:38 +0000   Mon, 20 Jan 2025 12:51:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 12:53:38 +0000   Mon, 20 Jan 2025 12:51:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 12:53:38 +0000   Mon, 20 Jan 2025 12:51:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 12:53:38 +0000   Mon, 20 Jan 2025 12:51:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    addons-917221
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 951b808b1aae4fa2986b1c303330c50c
	  System UUID:                951b808b-1aae-4fa2-986b-1c303330c50c
	  Boot ID:                    ee60421a-1fa5-4402-98ae-dc1a08a769a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-world-app-7d9564db4-lg6mc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-v4txw    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-hld8l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 coredns-668d6bf9bc-mxmhx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m36s
	  kube-system                 etcd-addons-917221                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-917221                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-917221        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-nmjdt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-addons-917221                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  local-path-storage          local-path-provisioner-76f89f99b5-66jcz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m33s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m46s (x8 over 4m47s)  kubelet          Node addons-917221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x8 over 4m47s)  kubelet          Node addons-917221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x7 over 4m47s)  kubelet          Node addons-917221 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m41s                  kubelet          Node addons-917221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s                  kubelet          Node addons-917221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s                  kubelet          Node addons-917221 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m39s                  kubelet          Node addons-917221 status is now: NodeReady
	  Normal  RegisteredNode           4m37s                  node-controller  Node addons-917221 event: Registered Node addons-917221 in Controller
	
	
	==> dmesg <==
	[  +4.147588] systemd-fstab-generator[870]: Ignoring "noauto" option for root device
	[  +0.941921] kauditd_printk_skb: 57 callbacks suppressed
	[Jan20 12:51] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.085832] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.945720] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.145506] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.116123] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.004428] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.141803] kauditd_printk_skb: 110 callbacks suppressed
	[ +18.003137] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.372907] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.563027] kauditd_printk_skb: 6 callbacks suppressed
	[Jan20 12:52] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.368515] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.315189] kauditd_printk_skb: 34 callbacks suppressed
	[  +9.456686] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.529602] kauditd_printk_skb: 18 callbacks suppressed
	[ +15.361990] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.289810] kauditd_printk_skb: 6 callbacks suppressed
	[Jan20 12:53] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.516537] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.001461] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.096580] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.263477] kauditd_printk_skb: 48 callbacks suppressed
	[Jan20 12:55] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [825f48d12e28b287a40b6846f6c259ab4927af379eb238898c778a953003164c] <==
	{"level":"warn","ts":"2025-01-20T12:52:24.793880Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.981864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:52:24.793950Z","caller":"traceutil/trace.go:171","msg":"trace[2025415363] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1122; }","duration":"264.074899ms","start":"2025-01-20T12:52:24.529869Z","end":"2025-01-20T12:52:24.793944Z","steps":["trace[2025415363] 'agreement among raft nodes before linearized reading'  (duration: 263.988933ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:52:24.792540Z","caller":"traceutil/trace.go:171","msg":"trace[1211804061] transaction","detail":"{read_only:false; response_revision:1122; number_of_response:1; }","duration":"346.650448ms","start":"2025-01-20T12:52:24.445772Z","end":"2025-01-20T12:52:24.792422Z","steps":["trace[1211804061] 'process raft request'  (duration: 343.728855ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:52:24.795146Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:52:24.445754Z","time spent":"348.964743ms","remote":"127.0.0.1:41170","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1088 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2025-01-20T12:52:32.256202Z","caller":"traceutil/trace.go:171","msg":"trace[1625914605] linearizableReadLoop","detail":"{readStateIndex:1173; appliedIndex:1172; }","duration":"134.561517ms","start":"2025-01-20T12:52:32.121627Z","end":"2025-01-20T12:52:32.256189Z","steps":["trace[1625914605] 'read index received'  (duration: 134.441078ms)","trace[1625914605] 'applied index is now lower than readState.Index'  (duration: 120.059µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T12:52:32.256424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.7448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:52:32.256467Z","caller":"traceutil/trace.go:171","msg":"trace[1528931714] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1138; }","duration":"134.859139ms","start":"2025-01-20T12:52:32.121601Z","end":"2025-01-20T12:52:32.256460Z","steps":["trace[1528931714] 'agreement among raft nodes before linearized reading'  (duration: 134.729908ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:52:32.256660Z","caller":"traceutil/trace.go:171","msg":"trace[132590318] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"251.639825ms","start":"2025-01-20T12:52:32.004994Z","end":"2025-01-20T12:52:32.256634Z","steps":["trace[132590318] 'process raft request'  (duration: 251.117152ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:52:47.196077Z","caller":"traceutil/trace.go:171","msg":"trace[2077567384] transaction","detail":"{read_only:false; response_revision:1225; number_of_response:1; }","duration":"152.939944ms","start":"2025-01-20T12:52:47.043111Z","end":"2025-01-20T12:52:47.196051Z","steps":["trace[2077567384] 'process raft request'  (duration: 152.722922ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:53:04.127662Z","caller":"traceutil/trace.go:171","msg":"trace[1266764251] linearizableReadLoop","detail":"{readStateIndex:1416; appliedIndex:1415; }","duration":"332.443613ms","start":"2025-01-20T12:53:03.795204Z","end":"2025-01-20T12:53:04.127648Z","steps":["trace[1266764251] 'read index received'  (duration: 332.300739ms)","trace[1266764251] 'applied index is now lower than readState.Index'  (duration: 142.454µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T12:53:04.127753Z","caller":"traceutil/trace.go:171","msg":"trace[189955408] transaction","detail":"{read_only:false; response_revision:1372; number_of_response:1; }","duration":"402.400679ms","start":"2025-01-20T12:53:03.725338Z","end":"2025-01-20T12:53:04.127739Z","steps":["trace[189955408] 'process raft request'  (duration: 402.195664ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:53:04.127812Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:53:03.725322Z","time spent":"402.453703ms","remote":"127.0.0.1:41100","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1315,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/registry-test\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/registry-test\" value_size:1271 >> failure:<>"}
	{"level":"warn","ts":"2025-01-20T12:53:04.127856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.996635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T12:53:04.127890Z","caller":"traceutil/trace.go:171","msg":"trace[459800961] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1372; }","duration":"213.056584ms","start":"2025-01-20T12:53:03.914825Z","end":"2025-01-20T12:53:04.127882Z","steps":["trace[459800961] 'agreement among raft nodes before linearized reading'  (duration: 212.964062ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:53:04.128120Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"332.909367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.225\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-01-20T12:53:04.128147Z","caller":"traceutil/trace.go:171","msg":"trace[1509497636] range","detail":"{range_begin:/registry/masterleases/192.168.39.225; range_end:; response_count:1; response_revision:1372; }","duration":"332.960086ms","start":"2025-01-20T12:53:03.795181Z","end":"2025-01-20T12:53:04.128141Z","steps":["trace[1509497636] 'agreement among raft nodes before linearized reading'  (duration: 332.869565ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:53:04.128162Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:53:03.795162Z","time spent":"332.996185ms","remote":"127.0.0.1:40946","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":1,"response size":158,"request content":"key:\"/registry/masterleases/192.168.39.225\" limit:1 "}
	{"level":"warn","ts":"2025-01-20T12:53:05.215101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.438314ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10100892832914920518 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" mod_revision:1381 > success:<request_put:<key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" value_size:10224 >> failure:<request_range:<key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-20T12:53:05.215241Z","caller":"traceutil/trace.go:171","msg":"trace[1931486989] linearizableReadLoop","detail":"{readStateIndex:1428; appliedIndex:1426; }","duration":"212.030502ms","start":"2025-01-20T12:53:05.003201Z","end":"2025-01-20T12:53:05.215231Z","steps":["trace[1931486989] 'read index received'  (duration: 9.206412ms)","trace[1931486989] 'applied index is now lower than readState.Index'  (duration: 202.82355ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T12:53:05.215253Z","caller":"traceutil/trace.go:171","msg":"trace[503587081] transaction","detail":"{read_only:false; response_revision:1382; number_of_response:1; }","duration":"220.636924ms","start":"2025-01-20T12:53:04.994602Z","end":"2025-01-20T12:53:05.215239Z","steps":["trace[503587081] 'process raft request'  (duration: 17.797383ms)","trace[503587081] 'compare'  (duration: 202.256849ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T12:53:05.215396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.188402ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-01-20T12:53:05.215416Z","caller":"traceutil/trace.go:171","msg":"trace[2120723050] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1383; }","duration":"212.225421ms","start":"2025-01-20T12:53:05.003185Z","end":"2025-01-20T12:53:05.215410Z","steps":["trace[2120723050] 'agreement among raft nodes before linearized reading'  (duration: 212.078523ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:53:05.215568Z","caller":"traceutil/trace.go:171","msg":"trace[1470755910] transaction","detail":"{read_only:false; response_revision:1383; number_of_response:1; }","duration":"212.419245ms","start":"2025-01-20T12:53:05.003142Z","end":"2025-01-20T12:53:05.215561Z","steps":["trace[1470755910] 'process raft request'  (duration: 212.041284ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T12:53:24.922455Z","caller":"traceutil/trace.go:171","msg":"trace[688617094] transaction","detail":"{read_only:false; response_revision:1620; number_of_response:1; }","duration":"326.970275ms","start":"2025-01-20T12:53:24.595459Z","end":"2025-01-20T12:53:24.922430Z","steps":["trace[688617094] 'process raft request'  (duration: 326.600947ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T12:53:24.922647Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T12:53:24.595443Z","time spent":"327.11284ms","remote":"127.0.0.1:41078","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1598 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 12:55:45 up 5 min,  0 users,  load average: 0.62, 1.08, 0.57
	Linux addons-917221 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e97266e6a915368f89181ff2044e91ae810e849d11382bd9c5a5198a5e96e28d] <==
	E0120 12:51:54.100362       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.59.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.59.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.59.118:443: connect: connection refused" logger="UnhandledError"
	I0120 12:51:54.188355       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0120 12:52:43.711176       1 conn.go:339] Error on socket receive: read tcp 192.168.39.225:8443->192.168.39.1:55300: use of closed network connection
	E0120 12:52:43.904568       1 conn.go:339] Error on socket receive: read tcp 192.168.39.225:8443->192.168.39.1:55318: use of closed network connection
	I0120 12:52:53.530472       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.236.100"}
	I0120 12:53:04.984165       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0120 12:53:06.251532       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0120 12:53:14.990457       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0120 12:53:21.002547       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0120 12:53:21.192487       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.37.63"}
	I0120 12:53:34.415205       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 12:53:34.415542       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 12:53:34.440199       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 12:53:34.440444       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 12:53:34.470453       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 12:53:34.471475       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 12:53:34.559717       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 12:53:34.559806       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0120 12:53:34.603384       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0120 12:53:34.603504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0120 12:53:35.559961       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0120 12:53:35.603333       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0120 12:53:35.623776       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0120 12:53:55.115813       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0120 12:55:43.876927       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.166.84"}
	
	
	==> kube-controller-manager [ec13b6bb534423fe2914ca4398593b0760827f81334ba3280087b8d1bae57164] <==
	E0120 12:54:44.406810       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 12:54:44.429501       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 12:54:44.431499       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0120 12:54:44.432806       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 12:54:44.432887       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 12:54:55.298413       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 12:54:55.299254       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0120 12:54:55.300146       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 12:54:55.300216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 12:55:18.022755       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 12:55:18.023906       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0120 12:55:18.024882       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 12:55:18.024951       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 12:55:41.112627       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 12:55:41.113802       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0120 12:55:41.114692       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 12:55:41.114750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0120 12:55:42.905405       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0120 12:55:42.906623       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0120 12:55:42.907478       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0120 12:55:42.907540       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0120 12:55:43.698072       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="45.595113ms"
	I0120 12:55:43.720513       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="22.312063ms"
	I0120 12:55:43.737148       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="16.542362ms"
	I0120 12:55:43.737317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="87.95µs"
	
	
	==> kube-proxy [3e5085c22f6741c6b3778af76f289ca9e89e86a4fc8c3a1764303f5174986282] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 12:51:11.718171       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 12:51:11.730581       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	E0120 12:51:11.730666       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 12:51:11.829524       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 12:51:11.829584       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 12:51:11.829608       1 server_linux.go:170] "Using iptables Proxier"
	I0120 12:51:11.833523       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 12:51:11.833883       1 server.go:497] "Version info" version="v1.32.0"
	I0120 12:51:11.833944       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 12:51:11.835684       1 config.go:199] "Starting service config controller"
	I0120 12:51:11.835739       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 12:51:11.835764       1 config.go:105] "Starting endpoint slice config controller"
	I0120 12:51:11.835768       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 12:51:11.839801       1 config.go:329] "Starting node config controller"
	I0120 12:51:11.839894       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 12:51:11.936298       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 12:51:11.936329       1 shared_informer.go:320] Caches are synced for service config
	I0120 12:51:11.940799       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0387f3cd1067a891f1a4e47913de92ba5e0c400cbeb6ff8c33877d11217ea536] <==
	W0120 12:51:02.115650       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 12:51:02.115686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:02.117977       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 12:51:02.120536       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:02.947779       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 12:51:02.947922       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:02.968421       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 12:51:02.968708       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:03.065242       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0120 12:51:03.065574       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:03.133923       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 12:51:03.134022       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:03.297352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 12:51:03.298222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:03.300449       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 12:51:03.300498       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 12:51:03.312794       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 12:51:03.312850       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:03.317361       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 12:51:03.317407       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:03.327675       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 12:51:03.327728       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 12:51:03.396583       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 12:51:03.396933       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0120 12:51:05.398831       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 12:55:04 addons-917221 kubelet[1233]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 12:55:05 addons-917221 kubelet[1233]: E0120 12:55:05.142699    1233 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377705142242206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:05 addons-917221 kubelet[1233]: E0120 12:55:05.142739    1233 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377705142242206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:06 addons-917221 kubelet[1233]: I0120 12:55:06.829755    1233 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hld8l" secret="" err="secret \"gcp-auth\" not found"
	Jan 20 12:55:15 addons-917221 kubelet[1233]: E0120 12:55:15.145865    1233 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377715145543084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:15 addons-917221 kubelet[1233]: E0120 12:55:15.146135    1233 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377715145543084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:25 addons-917221 kubelet[1233]: E0120 12:55:25.148143    1233 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377725147838768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:25 addons-917221 kubelet[1233]: E0120 12:55:25.148186    1233 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377725147838768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:35 addons-917221 kubelet[1233]: E0120 12:55:35.150869    1233 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377735150392371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:35 addons-917221 kubelet[1233]: E0120 12:55:35.150902    1233 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377735150392371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694456    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="5dccb816-ce80-4f01-b9a6-fa800a59ec91" containerName="task-pv-container"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694510    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="cbe259a1-e553-48b4-9470-c307d5a4471a" containerName="csi-external-health-monitor-controller"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694518    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="3f945bf7-84eb-4aca-a890-1829977b61c8" containerName="csi-attacher"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694524    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="cbe259a1-e553-48b4-9470-c307d5a4471a" containerName="node-driver-registrar"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694529    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="cbe259a1-e553-48b4-9470-c307d5a4471a" containerName="csi-snapshotter"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694534    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="cbe259a1-e553-48b4-9470-c307d5a4471a" containerName="liveness-probe"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694539    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="8ccac2dc-242e-4540-97d4-e3e8928c8d41" containerName="volume-snapshot-controller"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694544    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="9613d10d-71d1-425d-990f-9e4650a90330" containerName="volume-snapshot-controller"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694548    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="cbe259a1-e553-48b4-9470-c307d5a4471a" containerName="csi-provisioner"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694554    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="b86869c3-4327-49f7-adcb-8a3f69956acc" containerName="cloud-spanner-emulator"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694559    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="be6ab695-0956-4e8d-a885-c03398c00dda" containerName="csi-resizer"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.694569    1233 memory_manager.go:355] "RemoveStaleState removing state" podUID="cbe259a1-e553-48b4-9470-c307d5a4471a" containerName="hostpath"
	Jan 20 12:55:43 addons-917221 kubelet[1233]: I0120 12:55:43.775976    1233 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgdgc\" (UniqueName: \"kubernetes.io/projected/694e4a16-c20e-4d3a-8809-80c2084023dd-kube-api-access-tgdgc\") pod \"hello-world-app-7d9564db4-lg6mc\" (UID: \"694e4a16-c20e-4d3a-8809-80c2084023dd\") " pod="default/hello-world-app-7d9564db4-lg6mc"
	Jan 20 12:55:45 addons-917221 kubelet[1233]: E0120 12:55:45.158722    1233 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377745157983047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 12:55:45 addons-917221 kubelet[1233]: E0120 12:55:45.158769    1233 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737377745157983047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595276,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e88a6004c8bfa299f7ae498415e9b21b63fad5185ef07832fabcc0d803b1fbf6] <==
	I0120 12:51:18.344057       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 12:51:18.420508       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 12:51:18.423658       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 12:51:18.474016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 12:51:18.479973       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"27bccc05-2d55-4989-8ff2-56bde27e0188", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-917221_e5ad5d00-2644-403c-bcb5-2064c01d7474 became leader
	I0120 12:51:18.487895       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-917221_e5ad5d00-2644-403c-bcb5-2064c01d7474!
	I0120 12:51:18.588091       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-917221_e5ad5d00-2644-403c-bcb5-2064c01d7474!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-917221 -n addons-917221
helpers_test.go:261: (dbg) Run:  kubectl --context addons-917221 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-lg6mc ingress-nginx-admission-create-5dpnt ingress-nginx-admission-patch-57vh5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-917221 describe pod hello-world-app-7d9564db4-lg6mc ingress-nginx-admission-create-5dpnt ingress-nginx-admission-patch-57vh5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-917221 describe pod hello-world-app-7d9564db4-lg6mc ingress-nginx-admission-create-5dpnt ingress-nginx-admission-patch-57vh5: exit status 1 (76.405125ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-lg6mc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-917221/192.168.39.225
	Start Time:       Mon, 20 Jan 2025 12:55:43 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tgdgc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tgdgc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-lg6mc to addons-917221
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5dpnt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-57vh5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-917221 describe pod hello-world-app-7d9564db4-lg6mc ingress-nginx-admission-create-5dpnt ingress-nginx-admission-patch-57vh5: exit status 1
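Note: the describe command above is run without a namespace flag, so it defaults to the default namespace; the two ingress-nginx admission pods live in the ingress-nginx namespace, which would explain the NotFound errors even though they were listed as non-running pods just before. Assuming the cluster were still reachable, they could be inspected with an illustrative command (not part of the test run):
	kubectl --context addons-917221 -n ingress-nginx get pods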
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-917221 addons disable ingress --alsologtostderr -v=1: (7.773553582s)
--- FAIL: TestAddons/parallel/Ingress (154.41s)

                                                
                                    
TestPreload (165.39s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-988350 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-988350 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.815128294s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-988350 image pull gcr.io/k8s-minikube/busybox
E0120 13:46:26.557073 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-988350 image pull gcr.io/k8s-minikube/busybox: (1.413624646s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-988350
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-988350: (7.301032823s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-988350 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0120 13:47:19.679851 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-988350 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (57.837153775s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-988350 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
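Note: gcr.io/k8s-minikube/busybox, which was pulled before the stop, does not appear in the image list above. Assuming the test-preload-988350 profile were still available, the check the test performs could be reproduced manually with the minikube CLI (a sketch, not part of the test run):
	# list images known to the runtime inside the VM and look for the pulled busybox image
	out/minikube-linux-amd64 -p test-preload-988350 image list | grep k8s-minikube/busybox
	# a matching gcr.io/k8s-minikube/busybox line would indicate the image survived the stop/start cycle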
panic.go:629: *** TestPreload FAILED at 2025-01-20 13:47:33.081512773 +0000 UTC m=+3444.099416098
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-988350 -n test-preload-988350
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-988350 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-988350 logs -n 25: (1.102333348s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-916430 ssh -n                                                                 | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:32 UTC | 20 Jan 25 13:32 UTC |
	|         | multinode-916430-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-916430 ssh -n multinode-916430 sudo cat                                       | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:32 UTC | 20 Jan 25 13:32 UTC |
	|         | /home/docker/cp-test_multinode-916430-m03_multinode-916430.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-916430 cp multinode-916430-m03:/home/docker/cp-test.txt                       | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:32 UTC | 20 Jan 25 13:32 UTC |
	|         | multinode-916430-m02:/home/docker/cp-test_multinode-916430-m03_multinode-916430-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-916430 ssh -n                                                                 | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:32 UTC | 20 Jan 25 13:32 UTC |
	|         | multinode-916430-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-916430 ssh -n multinode-916430-m02 sudo cat                                   | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:32 UTC | 20 Jan 25 13:32 UTC |
	|         | /home/docker/cp-test_multinode-916430-m03_multinode-916430-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-916430 node stop m03                                                          | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:32 UTC | 20 Jan 25 13:32 UTC |
	| node    | multinode-916430 node start                                                             | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:32 UTC | 20 Jan 25 13:33 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-916430                                                                | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:33 UTC |                     |
	| stop    | -p multinode-916430                                                                     | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:33 UTC | 20 Jan 25 13:36 UTC |
	| start   | -p multinode-916430                                                                     | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:36 UTC | 20 Jan 25 13:39 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-916430                                                                | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:39 UTC |                     |
	| node    | multinode-916430 node delete                                                            | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:39 UTC | 20 Jan 25 13:39 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-916430 stop                                                                   | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:39 UTC | 20 Jan 25 13:42 UTC |
	| start   | -p multinode-916430                                                                     | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:42 UTC | 20 Jan 25 13:44 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-916430                                                                | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:44 UTC |                     |
	| start   | -p multinode-916430-m02                                                                 | multinode-916430-m02 | jenkins | v1.35.0 | 20 Jan 25 13:44 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-916430-m03                                                                 | multinode-916430-m03 | jenkins | v1.35.0 | 20 Jan 25 13:44 UTC | 20 Jan 25 13:44 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-916430                                                                 | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:44 UTC |                     |
	| delete  | -p multinode-916430-m03                                                                 | multinode-916430-m03 | jenkins | v1.35.0 | 20 Jan 25 13:44 UTC | 20 Jan 25 13:44 UTC |
	| delete  | -p multinode-916430                                                                     | multinode-916430     | jenkins | v1.35.0 | 20 Jan 25 13:44 UTC | 20 Jan 25 13:44 UTC |
	| start   | -p test-preload-988350                                                                  | test-preload-988350  | jenkins | v1.35.0 | 20 Jan 25 13:44 UTC | 20 Jan 25 13:46 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-988350 image pull                                                          | test-preload-988350  | jenkins | v1.35.0 | 20 Jan 25 13:46 UTC | 20 Jan 25 13:46 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-988350                                                                  | test-preload-988350  | jenkins | v1.35.0 | 20 Jan 25 13:46 UTC | 20 Jan 25 13:46 UTC |
	| start   | -p test-preload-988350                                                                  | test-preload-988350  | jenkins | v1.35.0 | 20 Jan 25 13:46 UTC | 20 Jan 25 13:47 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-988350 image list                                                          | test-preload-988350  | jenkins | v1.35.0 | 20 Jan 25 13:47 UTC | 20 Jan 25 13:47 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 13:46:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 13:46:35.050034 1958891 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:46:35.050182 1958891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:46:35.050196 1958891 out.go:358] Setting ErrFile to fd 2...
	I0120 13:46:35.050202 1958891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:46:35.050655 1958891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:46:35.051249 1958891 out.go:352] Setting JSON to false
	I0120 13:46:35.052295 1958891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":19741,"bootTime":1737361054,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 13:46:35.052418 1958891 start.go:139] virtualization: kvm guest
	I0120 13:46:35.054856 1958891 out.go:177] * [test-preload-988350] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 13:46:35.056287 1958891 notify.go:220] Checking for updates...
	I0120 13:46:35.056318 1958891 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:46:35.057790 1958891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:46:35.059213 1958891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:46:35.060574 1958891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:46:35.061827 1958891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 13:46:35.063200 1958891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:46:35.064907 1958891 config.go:182] Loaded profile config "test-preload-988350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0120 13:46:35.065291 1958891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:46:35.065349 1958891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:46:35.081229 1958891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I0120 13:46:35.081800 1958891 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:46:35.082397 1958891 main.go:141] libmachine: Using API Version  1
	I0120 13:46:35.082418 1958891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:46:35.082843 1958891 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:46:35.083010 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:46:35.084885 1958891 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 13:46:35.086242 1958891 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:46:35.086564 1958891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:46:35.086627 1958891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:46:35.101529 1958891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37029
	I0120 13:46:35.102147 1958891 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:46:35.102746 1958891 main.go:141] libmachine: Using API Version  1
	I0120 13:46:35.102767 1958891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:46:35.103148 1958891 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:46:35.103367 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:46:35.139520 1958891 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 13:46:35.140802 1958891 start.go:297] selected driver: kvm2
	I0120 13:46:35.140814 1958891 start.go:901] validating driver "kvm2" against &{Name:test-preload-988350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-988350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:46:35.140907 1958891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:46:35.141716 1958891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:46:35.141803 1958891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 13:46:35.158340 1958891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 13:46:35.158747 1958891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 13:46:35.158785 1958891 cni.go:84] Creating CNI manager for ""
	I0120 13:46:35.158838 1958891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:46:35.158897 1958891 start.go:340] cluster config:
	{Name:test-preload-988350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-988350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:46:35.159011 1958891 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:46:35.161235 1958891 out.go:177] * Starting "test-preload-988350" primary control-plane node in "test-preload-988350" cluster
	I0120 13:46:35.162679 1958891 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0120 13:46:35.187797 1958891 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0120 13:46:35.187836 1958891 cache.go:56] Caching tarball of preloaded images
	I0120 13:46:35.188060 1958891 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0120 13:46:35.189853 1958891 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0120 13:46:35.191168 1958891 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0120 13:46:35.219758 1958891 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0120 13:46:37.956288 1958891 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0120 13:46:37.956390 1958891 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0120 13:46:38.831715 1958891 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0120 13:46:38.831854 1958891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/config.json ...
	I0120 13:46:38.832091 1958891 start.go:360] acquireMachinesLock for test-preload-988350: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 13:46:38.832161 1958891 start.go:364] duration metric: took 47.196µs to acquireMachinesLock for "test-preload-988350"
	I0120 13:46:38.832176 1958891 start.go:96] Skipping create...Using existing machine configuration
	I0120 13:46:38.832183 1958891 fix.go:54] fixHost starting: 
	I0120 13:46:38.832440 1958891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:46:38.832476 1958891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:46:38.848091 1958891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I0120 13:46:38.848661 1958891 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:46:38.849269 1958891 main.go:141] libmachine: Using API Version  1
	I0120 13:46:38.849300 1958891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:46:38.849795 1958891 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:46:38.850046 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:46:38.850220 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetState
	I0120 13:46:38.852169 1958891 fix.go:112] recreateIfNeeded on test-preload-988350: state=Stopped err=<nil>
	I0120 13:46:38.852204 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	W0120 13:46:38.852412 1958891 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 13:46:38.855183 1958891 out.go:177] * Restarting existing kvm2 VM for "test-preload-988350" ...
	I0120 13:46:38.856411 1958891 main.go:141] libmachine: (test-preload-988350) Calling .Start
	I0120 13:46:38.856698 1958891 main.go:141] libmachine: (test-preload-988350) starting domain...
	I0120 13:46:38.856719 1958891 main.go:141] libmachine: (test-preload-988350) ensuring networks are active...
	I0120 13:46:38.857523 1958891 main.go:141] libmachine: (test-preload-988350) Ensuring network default is active
	I0120 13:46:38.857831 1958891 main.go:141] libmachine: (test-preload-988350) Ensuring network mk-test-preload-988350 is active
	I0120 13:46:38.858114 1958891 main.go:141] libmachine: (test-preload-988350) getting domain XML...
	I0120 13:46:38.858912 1958891 main.go:141] libmachine: (test-preload-988350) creating domain...
	I0120 13:46:40.082418 1958891 main.go:141] libmachine: (test-preload-988350) waiting for IP...
	I0120 13:46:40.083459 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:40.083885 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:40.084028 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:40.083894 1958943 retry.go:31] will retry after 189.019106ms: waiting for domain to come up
	I0120 13:46:40.274413 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:40.274956 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:40.274990 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:40.274913 1958943 retry.go:31] will retry after 251.37656ms: waiting for domain to come up
	I0120 13:46:40.528667 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:40.529099 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:40.529133 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:40.529026 1958943 retry.go:31] will retry after 389.17653ms: waiting for domain to come up
	I0120 13:46:40.919527 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:40.919937 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:40.919985 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:40.919928 1958943 retry.go:31] will retry after 455.558661ms: waiting for domain to come up
	I0120 13:46:41.376653 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:41.377109 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:41.377137 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:41.377038 1958943 retry.go:31] will retry after 531.281382ms: waiting for domain to come up
	I0120 13:46:41.909678 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:41.910132 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:41.910158 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:41.910086 1958943 retry.go:31] will retry after 690.762191ms: waiting for domain to come up
	I0120 13:46:42.602262 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:42.602626 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:42.602654 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:42.602582 1958943 retry.go:31] will retry after 1.126753963s: waiting for domain to come up
	I0120 13:46:43.730965 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:43.731387 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:43.731414 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:43.731345 1958943 retry.go:31] will retry after 1.139611013s: waiting for domain to come up
	I0120 13:46:44.873244 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:44.873683 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:44.873709 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:44.873640 1958943 retry.go:31] will retry after 1.836801997s: waiting for domain to come up
	I0120 13:46:46.712907 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:46.713488 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:46.713524 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:46.713423 1958943 retry.go:31] will retry after 2.221266774s: waiting for domain to come up
	I0120 13:46:48.936295 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:48.936681 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:48.936711 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:48.936649 1958943 retry.go:31] will retry after 1.817218629s: waiting for domain to come up
	I0120 13:46:50.756809 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:50.757426 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:50.757454 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:50.757332 1958943 retry.go:31] will retry after 3.621255567s: waiting for domain to come up
	I0120 13:46:54.380636 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:54.381133 1958891 main.go:141] libmachine: (test-preload-988350) DBG | unable to find current IP address of domain test-preload-988350 in network mk-test-preload-988350
	I0120 13:46:54.381159 1958891 main.go:141] libmachine: (test-preload-988350) DBG | I0120 13:46:54.381065 1958943 retry.go:31] will retry after 3.019218334s: waiting for domain to come up
	I0120 13:46:57.404323 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.404792 1958891 main.go:141] libmachine: (test-preload-988350) found domain IP: 192.168.39.87
	I0120 13:46:57.404814 1958891 main.go:141] libmachine: (test-preload-988350) reserving static IP address...
	I0120 13:46:57.404843 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has current primary IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.405327 1958891 main.go:141] libmachine: (test-preload-988350) reserved static IP address 192.168.39.87 for domain test-preload-988350
	I0120 13:46:57.405350 1958891 main.go:141] libmachine: (test-preload-988350) waiting for SSH...
	I0120 13:46:57.405375 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "test-preload-988350", mac: "52:54:00:a0:26:e1", ip: "192.168.39.87"} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:57.405403 1958891 main.go:141] libmachine: (test-preload-988350) DBG | skip adding static IP to network mk-test-preload-988350 - found existing host DHCP lease matching {name: "test-preload-988350", mac: "52:54:00:a0:26:e1", ip: "192.168.39.87"}
	I0120 13:46:57.405419 1958891 main.go:141] libmachine: (test-preload-988350) DBG | Getting to WaitForSSH function...
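The retry loop above is minikube polling libvirt for the domain's DHCP lease, with the wait growing from roughly 190ms to several seconds until an IP shows up. A rough shell equivalent of that wait (a sketch only; minikube does this through the libvirt API via retry.go, and the network name and MAC address below are the ones shown in the log):

    # Poll the libvirt network for the domain's DHCP lease until an IP appears.
    # Network name and MAC address come from the log above; the backoff growth is illustrative.
    mac="52:54:00:a0:26:e1"
    delay=0.2
    until ip=$(virsh net-dhcp-leases mk-test-preload-988350 \
                 | awk -v m="$mac" '$3 == m {split($5, a, "/"); print a[1]}') && [ -n "$ip" ]; do
        echo "no lease yet, retrying in ${delay}s"
        sleep "$delay"
        delay=$(echo "$delay * 1.5" | bc)
    done
    echo "domain IP: $ip"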
	I0120 13:46:57.407596 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.407902 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:57.407936 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.408072 1958891 main.go:141] libmachine: (test-preload-988350) DBG | Using SSH client type: external
	I0120 13:46:57.408101 1958891 main.go:141] libmachine: (test-preload-988350) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa (-rw-------)
	I0120 13:46:57.408123 1958891 main.go:141] libmachine: (test-preload-988350) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 13:46:57.408133 1958891 main.go:141] libmachine: (test-preload-988350) DBG | About to run SSH command:
	I0120 13:46:57.408141 1958891 main.go:141] libmachine: (test-preload-988350) DBG | exit 0
	I0120 13:46:57.535437 1958891 main.go:141] libmachine: (test-preload-988350) DBG | SSH cmd err, output: <nil>: 
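WaitForSSH shells out to the system ssh binary with the options dumped above and keeps running "exit 0" until the connection succeeds; the empty "SSH cmd err, output" line marks success. The same probe written out as a standalone loop (a sketch; the key path, options, and address are the ones logged above, while the 2s retry interval is an assumption):

    # Keep probing SSH with "exit 0" until the VM accepts the connection.
    key="/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa"
    until ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
              -o UserKnownHostsFile=/dev/null -o PasswordAuthentication=no \
              -o IdentitiesOnly=yes -i "$key" -p 22 docker@192.168.39.87 'exit 0'; do
        sleep 2   # retry interval not shown in the log; assumed here
    done
    echo "SSH is up"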
	I0120 13:46:57.535809 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetConfigRaw
	I0120 13:46:57.536552 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetIP
	I0120 13:46:57.539491 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.539958 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:57.540006 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.540326 1958891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/config.json ...
	I0120 13:46:57.540544 1958891 machine.go:93] provisionDockerMachine start ...
	I0120 13:46:57.540565 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:46:57.540845 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:57.543328 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.543702 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:57.543737 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.543887 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:46:57.544095 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:57.544280 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:57.544440 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:46:57.544659 1958891 main.go:141] libmachine: Using SSH client type: native
	I0120 13:46:57.544876 1958891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0120 13:46:57.544889 1958891 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 13:46:57.655645 1958891 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 13:46:57.655683 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetMachineName
	I0120 13:46:57.655984 1958891 buildroot.go:166] provisioning hostname "test-preload-988350"
	I0120 13:46:57.656054 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetMachineName
	I0120 13:46:57.656286 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:57.659449 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.659895 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:57.659937 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.660136 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:46:57.660371 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:57.660528 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:57.660656 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:46:57.660807 1958891 main.go:141] libmachine: Using SSH client type: native
	I0120 13:46:57.661003 1958891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0120 13:46:57.661019 1958891 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-988350 && echo "test-preload-988350" | sudo tee /etc/hostname
	I0120 13:46:57.785781 1958891 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-988350
	
	I0120 13:46:57.785813 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:57.788715 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.789061 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:57.789098 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.789286 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:46:57.789508 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:57.789728 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:57.789877 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:46:57.790086 1958891 main.go:141] libmachine: Using SSH client type: native
	I0120 13:46:57.790319 1958891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0120 13:46:57.790356 1958891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-988350' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-988350/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-988350' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 13:46:57.908419 1958891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
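The two SSH commands above are the whole hostname-provisioning step: write the new name to /etc/hostname, then make sure /etc/hosts maps 127.0.1.1 to it. A quick way to confirm the result from inside the VM (a verification sketch, not part of minikube's own flow):

    # Confirm the hostname step took effect (run inside the VM).
    hostname                                              # expect: test-preload-988350
    grep -n 'test-preload-988350' /etc/hostname /etc/hosts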
	I0120 13:46:57.908461 1958891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 13:46:57.908510 1958891 buildroot.go:174] setting up certificates
	I0120 13:46:57.908523 1958891 provision.go:84] configureAuth start
	I0120 13:46:57.908538 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetMachineName
	I0120 13:46:57.908862 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetIP
	I0120 13:46:57.911717 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.912125 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:57.912166 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.912323 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:57.914639 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.914917 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:57.914950 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:57.915052 1958891 provision.go:143] copyHostCerts
	I0120 13:46:57.915114 1958891 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 13:46:57.915135 1958891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 13:46:57.915213 1958891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 13:46:57.915336 1958891 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 13:46:57.915348 1958891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 13:46:57.915385 1958891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 13:46:57.915467 1958891 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 13:46:57.915477 1958891 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 13:46:57.915510 1958891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 13:46:57.915587 1958891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.test-preload-988350 san=[127.0.0.1 192.168.39.87 localhost minikube test-preload-988350]
	I0120 13:46:58.081407 1958891 provision.go:177] copyRemoteCerts
	I0120 13:46:58.081474 1958891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 13:46:58.081502 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:58.084527 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.084798 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:58.084837 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.085046 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:46:58.085233 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:58.085371 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:46:58.085509 1958891 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa Username:docker}
	I0120 13:46:58.169516 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 13:46:58.196216 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0120 13:46:58.222972 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 13:46:58.248905 1958891 provision.go:87] duration metric: took 340.36574ms to configureAuth
	I0120 13:46:58.248936 1958891 buildroot.go:189] setting minikube options for container-runtime
	I0120 13:46:58.249129 1958891 config.go:182] Loaded profile config "test-preload-988350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0120 13:46:58.249211 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:58.252312 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.252783 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:58.252816 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.253010 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:46:58.253258 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:58.253470 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:58.253636 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:46:58.253801 1958891 main.go:141] libmachine: Using SSH client type: native
	I0120 13:46:58.254020 1958891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0120 13:46:58.254040 1958891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 13:46:58.487837 1958891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 13:46:58.487871 1958891 machine.go:96] duration metric: took 947.313301ms to provisionDockerMachine
	I0120 13:46:58.487885 1958891 start.go:293] postStartSetup for "test-preload-988350" (driver="kvm2")
	I0120 13:46:58.487896 1958891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 13:46:58.487913 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:46:58.488277 1958891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 13:46:58.488314 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:58.491225 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.491589 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:58.491619 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.491727 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:46:58.491926 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:58.492061 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:46:58.492190 1958891 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa Username:docker}
	I0120 13:46:58.578624 1958891 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 13:46:58.583021 1958891 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 13:46:58.583054 1958891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 13:46:58.583132 1958891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 13:46:58.583268 1958891 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 13:46:58.583399 1958891 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 13:46:58.593997 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:46:58.618718 1958891 start.go:296] duration metric: took 130.812716ms for postStartSetup
	I0120 13:46:58.618772 1958891 fix.go:56] duration metric: took 19.786588208s for fixHost
	I0120 13:46:58.618799 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:58.621384 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.621766 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:58.621802 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.621895 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:46:58.622129 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:58.622295 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:58.622473 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:46:58.622658 1958891 main.go:141] libmachine: Using SSH client type: native
	I0120 13:46:58.622861 1958891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0120 13:46:58.622875 1958891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 13:46:58.731867 1958891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737380818.687405240
	
	I0120 13:46:58.731897 1958891 fix.go:216] guest clock: 1737380818.687405240
	I0120 13:46:58.731905 1958891 fix.go:229] Guest: 2025-01-20 13:46:58.68740524 +0000 UTC Remote: 2025-01-20 13:46:58.618777033 +0000 UTC m=+23.608236189 (delta=68.628207ms)
	I0120 13:46:58.731926 1958891 fix.go:200] guest clock delta is within tolerance: 68.628207ms
	I0120 13:46:58.731932 1958891 start.go:83] releasing machines lock for "test-preload-988350", held for 19.899761678s
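The guest-clock check is simply "date +%s.%N" run in the VM and compared against the host's wall clock; here the delta is about 68ms, well inside tolerance. The same comparison done by hand (a sketch; the ssh key path is reused from earlier in the log):

    # Compare guest and host clocks, mirroring the fix.go guest-clock check above.
    key="/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa"
    guest=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
                -i "$key" docker@192.168.39.87 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest clock delta: $(echo "$host - $guest" | bc)s"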
	I0120 13:46:58.731956 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:46:58.732227 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetIP
	I0120 13:46:58.735014 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.735437 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:58.735506 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.735627 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:46:58.736092 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:46:58.736286 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:46:58.736421 1958891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 13:46:58.736474 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:58.736480 1958891 ssh_runner.go:195] Run: cat /version.json
	I0120 13:46:58.736501 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:46:58.739442 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.739621 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.739849 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:58.739881 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.740053 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:46:58.740163 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:46:58.740192 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:46:58.740250 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:58.740381 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:46:58.740463 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:46:58.740520 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:46:58.740599 1958891 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa Username:docker}
	I0120 13:46:58.740645 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:46:58.740783 1958891 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa Username:docker}
	I0120 13:46:58.848187 1958891 ssh_runner.go:195] Run: systemctl --version
	I0120 13:46:58.854745 1958891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 13:46:59.005761 1958891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 13:46:59.012176 1958891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 13:46:59.012250 1958891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 13:46:59.030510 1958891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 13:46:59.030545 1958891 start.go:495] detecting cgroup driver to use...
	I0120 13:46:59.030643 1958891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 13:46:59.049926 1958891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 13:46:59.064051 1958891 docker.go:217] disabling cri-docker service (if available) ...
	I0120 13:46:59.064122 1958891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 13:46:59.078665 1958891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 13:46:59.093542 1958891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 13:46:59.205764 1958891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 13:46:59.349710 1958891 docker.go:233] disabling docker service ...
	I0120 13:46:59.349776 1958891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 13:46:59.365502 1958891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 13:46:59.379791 1958891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 13:46:59.512308 1958891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 13:46:59.622357 1958891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 13:46:59.637965 1958891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 13:46:59.657482 1958891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0120 13:46:59.657567 1958891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:46:59.668500 1958891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 13:46:59.668576 1958891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:46:59.679650 1958891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:46:59.691508 1958891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:46:59.702169 1958891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 13:46:59.713194 1958891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:46:59.724210 1958891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:46:59.742880 1958891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:46:59.753635 1958891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 13:46:59.763424 1958891 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 13:46:59.763502 1958891 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 13:46:59.778543 1958891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 13:46:59.788245 1958891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:46:59.901987 1958891 ssh_runner.go:195] Run: sudo systemctl restart crio
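Taken together, the commands from the crictl.yaml write through the crio restart reconfigure the runtime for this profile: point crictl at the CRI-O socket, pin the pause image to registry.k8s.io/pause:3.7, switch the cgroup manager to cgroupfs, open unprivileged low ports, load br_netfilter, enable IP forwarding, and restart the service. The main edits collected in one place (the same commands the log runs; the default_sysctls list handling is omitted for brevity):

    # CRI-O reconfiguration as performed above (paths and values copied from the log).
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter                        # bridge-nf-call-iptables was missing, hence the modprobe above
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio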
	I0120 13:46:59.997734 1958891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 13:46:59.997816 1958891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 13:47:00.004020 1958891 start.go:563] Will wait 60s for crictl version
	I0120 13:47:00.004109 1958891 ssh_runner.go:195] Run: which crictl
	I0120 13:47:00.008156 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 13:47:00.055212 1958891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 13:47:00.055309 1958891 ssh_runner.go:195] Run: crio --version
	I0120 13:47:00.084421 1958891 ssh_runner.go:195] Run: crio --version
	I0120 13:47:00.115183 1958891 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0120 13:47:00.116591 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetIP
	I0120 13:47:00.119291 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:47:00.119648 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:47:00.119684 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:47:00.119903 1958891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 13:47:00.124436 1958891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:47:00.137428 1958891 kubeadm.go:883] updating cluster {Name:test-preload-988350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-988350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 13:47:00.137612 1958891 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0120 13:47:00.137662 1958891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:47:00.175666 1958891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0120 13:47:00.175758 1958891 ssh_runner.go:195] Run: which lz4
	I0120 13:47:00.179944 1958891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 13:47:00.184272 1958891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 13:47:00.184303 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0120 13:47:01.856729 1958891 crio.go:462] duration metric: took 1.676816954s to copy over tarball
	I0120 13:47:01.856817 1958891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 13:47:04.345619 1958891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.488767394s)
	I0120 13:47:04.345668 1958891 crio.go:469] duration metric: took 2.488902478s to extract the tarball
	I0120 13:47:04.345679 1958891 ssh_runner.go:146] rm: /preloaded.tar.lz4
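Because crictl images did not report the expected v1.24.4 images, minikube copies the ~459MB preload tarball into the VM and unpacks it over /var so the container image store is pre-populated, then removes the tarball. The extraction as run above, with the copy step sketched around it (minikube streams the file over its own SSH session rather than calling scp):

    # Copy and unpack the preloaded image tarball into the container store (tar flags from the log).
    scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.39.87:/preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4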
	I0120 13:47:04.387832 1958891 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:47:04.430565 1958891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0120 13:47:04.430595 1958891 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 13:47:04.430711 1958891 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:47:04.430718 1958891 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0120 13:47:04.430730 1958891 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0120 13:47:04.430808 1958891 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 13:47:04.430815 1958891 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0120 13:47:04.430819 1958891 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0120 13:47:04.430745 1958891 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0120 13:47:04.430774 1958891 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 13:47:04.432498 1958891 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0120 13:47:04.432615 1958891 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:47:04.432645 1958891 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 13:47:04.432649 1958891 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 13:47:04.432645 1958891 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0120 13:47:04.432722 1958891 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0120 13:47:04.432649 1958891 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0120 13:47:04.432927 1958891 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0120 13:47:04.587298 1958891 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0120 13:47:04.587665 1958891 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 13:47:04.597077 1958891 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0120 13:47:04.597842 1958891 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0120 13:47:04.599377 1958891 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0120 13:47:04.604468 1958891 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0120 13:47:04.634143 1958891 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0120 13:47:04.683013 1958891 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0120 13:47:04.683079 1958891 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0120 13:47:04.683134 1958891 ssh_runner.go:195] Run: which crictl
	I0120 13:47:04.739804 1958891 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0120 13:47:04.739858 1958891 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 13:47:04.739903 1958891 ssh_runner.go:195] Run: which crictl
	I0120 13:47:04.748500 1958891 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0120 13:47:04.748549 1958891 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0120 13:47:04.748591 1958891 ssh_runner.go:195] Run: which crictl
	I0120 13:47:04.769673 1958891 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0120 13:47:04.769726 1958891 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0120 13:47:04.769771 1958891 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0120 13:47:04.769785 1958891 ssh_runner.go:195] Run: which crictl
	I0120 13:47:04.769808 1958891 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0120 13:47:04.769829 1958891 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0120 13:47:04.769865 1958891 ssh_runner.go:195] Run: which crictl
	I0120 13:47:04.769873 1958891 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0120 13:47:04.769913 1958891 ssh_runner.go:195] Run: which crictl
	I0120 13:47:04.787953 1958891 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0120 13:47:04.787995 1958891 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0120 13:47:04.788059 1958891 ssh_runner.go:195] Run: which crictl
	I0120 13:47:04.788061 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 13:47:04.788102 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 13:47:04.788142 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 13:47:04.788165 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0120 13:47:04.788218 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0120 13:47:04.788249 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 13:47:04.894220 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 13:47:04.897908 1958891 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:47:04.933007 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0120 13:47:04.939198 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 13:47:04.939211 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0120 13:47:04.939321 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 13:47:04.939388 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0120 13:47:04.939444 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 13:47:05.045276 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0120 13:47:05.247242 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0120 13:47:05.247393 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0120 13:47:05.247450 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0120 13:47:05.247539 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0120 13:47:05.247620 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0120 13:47:05.247712 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0120 13:47:05.247751 1958891 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0120 13:47:05.247866 1958891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0120 13:47:05.401853 1958891 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0120 13:47:05.401933 1958891 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0120 13:47:05.402054 1958891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0120 13:47:05.402098 1958891 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0120 13:47:05.402108 1958891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0120 13:47:05.402172 1958891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0120 13:47:05.402180 1958891 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0120 13:47:05.402261 1958891 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0120 13:47:05.402265 1958891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0120 13:47:05.402353 1958891 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0120 13:47:05.402421 1958891 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0120 13:47:05.402427 1958891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0120 13:47:05.402434 1958891 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0120 13:47:05.402464 1958891 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0120 13:47:05.479185 1958891 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0120 13:47:05.479238 1958891 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0120 13:47:05.479285 1958891 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0120 13:47:05.479320 1958891 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0120 13:47:05.479363 1958891 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0120 13:47:05.479418 1958891 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0120 13:47:05.479446 1958891 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0120 13:47:07.784842 1958891 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.382349s)
	I0120 13:47:07.784883 1958891 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0120 13:47:07.784907 1958891 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.305565304s)
	I0120 13:47:07.784924 1958891 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0120 13:47:07.784952 1958891 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0120 13:47:07.785003 1958891 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0120 13:47:09.940567 1958891 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.155522924s)
	I0120 13:47:09.940613 1958891 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0120 13:47:09.940650 1958891 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0120 13:47:09.940735 1958891 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0120 13:47:10.689218 1958891 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0120 13:47:10.689289 1958891 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0120 13:47:10.689373 1958891 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0120 13:47:11.140962 1958891 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0120 13:47:11.141019 1958891 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0120 13:47:11.141085 1958891 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0120 13:47:11.488838 1958891 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0120 13:47:11.488899 1958891 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0120 13:47:11.488948 1958891 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0120 13:47:12.233769 1958891 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0120 13:47:12.233823 1958891 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0120 13:47:12.233869 1958891 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0120 13:47:13.076767 1958891 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0120 13:47:13.076826 1958891 cache_images.go:123] Successfully loaded all cached images
	I0120 13:47:13.076834 1958891 cache_images.go:92] duration metric: took 8.646208719s to LoadCachedImages
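The lines above show the cached-image phase: each tarball under /var/lib/minikube/images is handed to "sudo podman load -i ..." in turn, and the total duration is recorded. A minimal Go sketch of that loop, assuming the tarballs are already on disk (the directory and image list are taken from the log; everything else is illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	dir := "/var/lib/minikube/images"
	images := []string{
		"pause_3.7", "etcd_3.5.3-0", "kube-apiserver_v1.24.4",
		"kube-scheduler_v1.24.4", "coredns_v1.8.6",
		"kube-controller-manager_v1.24.4", "kube-proxy_v1.24.4",
	}
	start := time.Now()
	for _, img := range images {
		tar := filepath.Join(dir, img)
		// Equivalent to the "sudo podman load -i <tarball>" calls in the log.
		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
		if err != nil {
			log.Fatalf("loading %s: %v\n%s", tar, err, out)
		}
		fmt.Printf("loaded %s\n", img)
	}
	fmt.Printf("loaded all cached images in %s\n", time.Since(start))
}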
	I0120 13:47:13.076853 1958891 kubeadm.go:934] updating node { 192.168.39.87 8443 v1.24.4 crio true true} ...
	I0120 13:47:13.077012 1958891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-988350 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-988350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 13:47:13.077109 1958891 ssh_runner.go:195] Run: crio config
	I0120 13:47:13.132361 1958891 cni.go:84] Creating CNI manager for ""
	I0120 13:47:13.132383 1958891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:47:13.132394 1958891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 13:47:13.132421 1958891 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-988350 NodeName:test-preload-988350 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 13:47:13.132617 1958891 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-988350"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 13:47:13.132719 1958891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0120 13:47:13.142776 1958891 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 13:47:13.142868 1958891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 13:47:13.152658 1958891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0120 13:47:13.170120 1958891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 13:47:13.187590 1958891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0120 13:47:13.205806 1958891 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0120 13:47:13.210105 1958891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:47:13.223464 1958891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:47:13.337351 1958891 ssh_runner.go:195] Run: sudo systemctl start kubelet
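The hosts-file step above is the usual idempotent pattern: strip any existing "control-plane.minikube.internal" entry, then append the current IP. A rough Go equivalent of that shell one-liner (the path, IP, and hostname are taken from the log; this sketch needs root to write /etc/hosts):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.39.87\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Mirror the log's `grep -v $'\tcontrol-plane.minikube.internal$'`.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")), 0644); err != nil {
		log.Fatal(err)
	}
}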
	I0120 13:47:13.356076 1958891 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350 for IP: 192.168.39.87
	I0120 13:47:13.356134 1958891 certs.go:194] generating shared ca certs ...
	I0120 13:47:13.356162 1958891 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:47:13.356382 1958891 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 13:47:13.356459 1958891 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 13:47:13.356478 1958891 certs.go:256] generating profile certs ...
	I0120 13:47:13.356605 1958891 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/client.key
	I0120 13:47:13.356693 1958891 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/apiserver.key.d36f74aa
	I0120 13:47:13.356753 1958891 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/proxy-client.key
	I0120 13:47:13.356940 1958891 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 13:47:13.357004 1958891 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 13:47:13.357021 1958891 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 13:47:13.357061 1958891 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 13:47:13.357101 1958891 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 13:47:13.357136 1958891 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 13:47:13.357188 1958891 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:47:13.358121 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 13:47:13.396886 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 13:47:13.443505 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 13:47:13.500000 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 13:47:13.532779 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 13:47:13.562405 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 13:47:13.597269 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 13:47:13.623507 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 13:47:13.648978 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 13:47:13.677392 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 13:47:13.705092 1958891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 13:47:13.733454 1958891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 13:47:13.753810 1958891 ssh_runner.go:195] Run: openssl version
	I0120 13:47:13.759928 1958891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 13:47:13.771204 1958891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 13:47:13.776189 1958891 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 13:47:13.776266 1958891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 13:47:13.782396 1958891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 13:47:13.793424 1958891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 13:47:13.804592 1958891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 13:47:13.809353 1958891 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 13:47:13.809420 1958891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 13:47:13.815423 1958891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 13:47:13.826834 1958891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 13:47:13.838357 1958891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:47:13.843753 1958891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:47:13.843823 1958891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:47:13.850256 1958891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
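Each "openssl x509 -hash" plus symlink pair above makes a CA certificate discoverable through OpenSSL's hashed-name lookup in /etc/ssl/certs. A small Go sketch of the same idea, shelling out to openssl for the hash (the certificate path and hash value match the log; the skip-if-present check is an assumption about the intent of the "test -L" guard):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command the log runs: prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("symlink already present:", link)
		return
	}
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("created", link, "->", cert)
}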
	I0120 13:47:13.861934 1958891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 13:47:13.866956 1958891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 13:47:13.873473 1958891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 13:47:13.879947 1958891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 13:47:13.886639 1958891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 13:47:13.892838 1958891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 13:47:13.899468 1958891 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
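The "-checkend 86400" probes above ask whether each control-plane certificate will still be valid 24 hours from now. The same check can be done without openssl using Go's crypto/x509; the file list below mirrors the paths in the log, the rest is a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	deadline := time.Now().Add(24 * time.Hour)
	for _, path := range certs {
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatalf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent to `openssl x509 -checkend 86400`: fail if it expires within 24h.
		if cert.NotAfter.Before(deadline) {
			log.Fatalf("%s expires at %s (within 24h)", path, cert.NotAfter)
		}
		fmt.Printf("%s valid until %s\n", path, cert.NotAfter)
	}
}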
	I0120 13:47:13.906077 1958891 kubeadm.go:392] StartCluster: {Name:test-preload-988350 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-988350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:47:13.906172 1958891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 13:47:13.906239 1958891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 13:47:13.955268 1958891 cri.go:89] found id: ""
	I0120 13:47:13.955346 1958891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 13:47:13.966727 1958891 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 13:47:13.966749 1958891 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 13:47:13.966798 1958891 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 13:47:13.977978 1958891 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 13:47:13.978487 1958891 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-988350" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:47:13.978588 1958891 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-988350" cluster setting kubeconfig missing "test-preload-988350" context setting]
	I0120 13:47:13.978881 1958891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:47:13.979481 1958891 kapi.go:59] client config for test-preload-988350: &rest.Config{Host:"https://192.168.39.87:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/client.crt", KeyFile:"/home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/client.key", CAFile:"/home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243bda0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0120 13:47:13.980214 1958891 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 13:47:13.991265 1958891 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.87
	I0120 13:47:13.991305 1958891 kubeadm.go:1160] stopping kube-system containers ...
	I0120 13:47:13.991321 1958891 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 13:47:13.991381 1958891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 13:47:14.034747 1958891 cri.go:89] found id: ""
	I0120 13:47:14.034823 1958891 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 13:47:14.054514 1958891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 13:47:14.066046 1958891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 13:47:14.066077 1958891 kubeadm.go:157] found existing configuration files:
	
	I0120 13:47:14.066134 1958891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 13:47:14.077090 1958891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 13:47:14.077159 1958891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 13:47:14.088233 1958891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 13:47:14.098882 1958891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 13:47:14.098953 1958891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 13:47:14.110128 1958891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 13:47:14.120333 1958891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 13:47:14.120419 1958891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 13:47:14.131147 1958891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 13:47:14.141073 1958891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 13:47:14.141147 1958891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 13:47:14.151769 1958891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 13:47:14.161758 1958891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:47:14.272687 1958891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:47:14.961971 1958891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:47:15.234264 1958891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:47:15.323551 1958891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:47:15.415978 1958891 api_server.go:52] waiting for apiserver process to appear ...
	I0120 13:47:15.416105 1958891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:47:15.916848 1958891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:47:16.416779 1958891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:47:16.459627 1958891 api_server.go:72] duration metric: took 1.043650205s to wait for apiserver process to appear ...
	I0120 13:47:16.459658 1958891 api_server.go:88] waiting for apiserver healthz status ...
	I0120 13:47:16.459678 1958891 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0120 13:47:16.460193 1958891 api_server.go:269] stopped: https://192.168.39.87:8443/healthz: Get "https://192.168.39.87:8443/healthz": dial tcp 192.168.39.87:8443: connect: connection refused
	I0120 13:47:16.959878 1958891 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0120 13:47:16.960604 1958891 api_server.go:269] stopped: https://192.168.39.87:8443/healthz: Get "https://192.168.39.87:8443/healthz": dial tcp 192.168.39.87:8443: connect: connection refused
	I0120 13:47:17.460301 1958891 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0120 13:47:20.398570 1958891 api_server.go:279] https://192.168.39.87:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 13:47:20.398629 1958891 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 13:47:20.398651 1958891 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0120 13:47:20.429525 1958891 api_server.go:279] https://192.168.39.87:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 13:47:20.429565 1958891 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 13:47:20.459815 1958891 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0120 13:47:20.485502 1958891 api_server.go:279] https://192.168.39.87:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 13:47:20.485549 1958891 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 13:47:20.960256 1958891 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0120 13:47:20.966016 1958891 api_server.go:279] https://192.168.39.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 13:47:20.966065 1958891 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 13:47:21.459735 1958891 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0120 13:47:21.467129 1958891 api_server.go:279] https://192.168.39.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 13:47:21.467167 1958891 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 13:47:21.959824 1958891 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0120 13:47:21.965542 1958891 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I0120 13:47:21.974668 1958891 api_server.go:141] control plane version: v1.24.4
	I0120 13:47:21.974700 1958891 api_server.go:131] duration metric: took 5.515033373s to wait for apiserver health ...
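The health wait above tolerates three failure shapes before the final 200: connection refused while the apiserver starts, 403 while anonymous access to /healthz is still forbidden, and 500 while post-start hooks finish. A compact Go polling loop with the same tolerance (the endpoint is from the log; TLS verification is skipped only to keep the sketch short, whereas minikube itself authenticates with the cluster CA and client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.87:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			// 403 and 500 are expected while the apiserver finishes bootstrapping.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy in time")
}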
	I0120 13:47:21.974712 1958891 cni.go:84] Creating CNI manager for ""
	I0120 13:47:21.974721 1958891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:47:21.976536 1958891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 13:47:21.977697 1958891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 13:47:21.988899 1958891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 13:47:22.019111 1958891 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 13:47:22.019235 1958891 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0120 13:47:22.019259 1958891 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0120 13:47:22.034184 1958891 system_pods.go:59] 7 kube-system pods found
	I0120 13:47:22.034234 1958891 system_pods.go:61] "coredns-6d4b75cb6d-z8zf7" [ed1e691a-b489-4aa6-86bd-b33c763327b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 13:47:22.034254 1958891 system_pods.go:61] "etcd-test-preload-988350" [b8fb701d-d347-4d18-864a-42631476e07c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 13:47:22.034261 1958891 system_pods.go:61] "kube-apiserver-test-preload-988350" [e53662ef-dbb0-43eb-a3cd-e34b93ec2e84] Running
	I0120 13:47:22.034270 1958891 system_pods.go:61] "kube-controller-manager-test-preload-988350" [7c2b9384-405a-49ff-a45c-91762d57c5f2] Running
	I0120 13:47:22.034276 1958891 system_pods.go:61] "kube-proxy-ngvnk" [f943b93d-6df2-408a-8192-471496875019] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 13:47:22.034285 1958891 system_pods.go:61] "kube-scheduler-test-preload-988350" [9d6aedb0-e187-41b3-96d5-b3c4277e3987] Running
	I0120 13:47:22.034304 1958891 system_pods.go:61] "storage-provisioner" [2d1d9ca0-e2c6-4b87-9c01-d72721f1262d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 13:47:22.034313 1958891 system_pods.go:74] duration metric: took 15.170368ms to wait for pod list to return data ...
	I0120 13:47:22.034335 1958891 node_conditions.go:102] verifying NodePressure condition ...
	I0120 13:47:22.037921 1958891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 13:47:22.037957 1958891 node_conditions.go:123] node cpu capacity is 2
	I0120 13:47:22.037973 1958891 node_conditions.go:105] duration metric: took 3.631234ms to run NodePressure ...
	I0120 13:47:22.037996 1958891 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:47:22.242271 1958891 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 13:47:22.248374 1958891 kubeadm.go:739] kubelet initialised
	I0120 13:47:22.248403 1958891 kubeadm.go:740] duration metric: took 6.098864ms waiting for restarted kubelet to initialise ...
	I0120 13:47:22.248414 1958891 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 13:47:22.254055 1958891 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-z8zf7" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:22.260103 1958891 pod_ready.go:98] node "test-preload-988350" hosting pod "coredns-6d4b75cb6d-z8zf7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.260132 1958891 pod_ready.go:82] duration metric: took 6.040554ms for pod "coredns-6d4b75cb6d-z8zf7" in "kube-system" namespace to be "Ready" ...
	E0120 13:47:22.260140 1958891 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-988350" hosting pod "coredns-6d4b75cb6d-z8zf7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.260148 1958891 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:22.270948 1958891 pod_ready.go:98] node "test-preload-988350" hosting pod "etcd-test-preload-988350" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.270980 1958891 pod_ready.go:82] duration metric: took 10.822593ms for pod "etcd-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	E0120 13:47:22.270994 1958891 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-988350" hosting pod "etcd-test-preload-988350" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.271004 1958891 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:22.277334 1958891 pod_ready.go:98] node "test-preload-988350" hosting pod "kube-apiserver-test-preload-988350" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.277367 1958891 pod_ready.go:82] duration metric: took 6.351096ms for pod "kube-apiserver-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	E0120 13:47:22.277382 1958891 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-988350" hosting pod "kube-apiserver-test-preload-988350" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.277395 1958891 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:22.423224 1958891 pod_ready.go:98] node "test-preload-988350" hosting pod "kube-controller-manager-test-preload-988350" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.423271 1958891 pod_ready.go:82] duration metric: took 145.853809ms for pod "kube-controller-manager-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	E0120 13:47:22.423288 1958891 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-988350" hosting pod "kube-controller-manager-test-preload-988350" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.423299 1958891 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ngvnk" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:22.823787 1958891 pod_ready.go:98] node "test-preload-988350" hosting pod "kube-proxy-ngvnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.823820 1958891 pod_ready.go:82] duration metric: took 400.504401ms for pod "kube-proxy-ngvnk" in "kube-system" namespace to be "Ready" ...
	E0120 13:47:22.823833 1958891 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-988350" hosting pod "kube-proxy-ngvnk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:22.823840 1958891 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:23.222682 1958891 pod_ready.go:98] node "test-preload-988350" hosting pod "kube-scheduler-test-preload-988350" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:23.222713 1958891 pod_ready.go:82] duration metric: took 398.863204ms for pod "kube-scheduler-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	E0120 13:47:23.222725 1958891 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-988350" hosting pod "kube-scheduler-test-preload-988350" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:23.222736 1958891 pod_ready.go:39] duration metric: took 974.31142ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
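The pod_ready wait above keeps polling the system-critical pods and skips ahead while the node itself is still NotReady. A dependency-free sketch of a similar wait that shells out to kubectl rather than using client-go (the kubeconfig path and namespace follow the log; the jsonpath expression is illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Print "<pod>=<Ready status>" for every pod in kube-system.
		out, err := exec.Command("kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "pods", "-n", "kube-system",
			"-o", `jsonpath={range .items[*]}{.metadata.name}={.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`,
		).Output()
		if err != nil {
			fmt.Println("kubectl failed, retrying:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		allReady := true
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" && !strings.HasSuffix(line, "=True") {
				allReady = false
				fmt.Println("not ready yet:", line)
			}
		}
		if allReady {
			fmt.Println("all kube-system pods are Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for kube-system pods")
}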
	I0120 13:47:23.222772 1958891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 13:47:23.235319 1958891 ops.go:34] apiserver oom_adj: -16
	I0120 13:47:23.235352 1958891 kubeadm.go:597] duration metric: took 9.268595675s to restartPrimaryControlPlane
	I0120 13:47:23.235365 1958891 kubeadm.go:394] duration metric: took 9.329296161s to StartCluster
	I0120 13:47:23.235388 1958891 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:47:23.235485 1958891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:47:23.236455 1958891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:47:23.236753 1958891 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 13:47:23.236822 1958891 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 13:47:23.236934 1958891 addons.go:69] Setting storage-provisioner=true in profile "test-preload-988350"
	I0120 13:47:23.236956 1958891 addons.go:238] Setting addon storage-provisioner=true in "test-preload-988350"
	W0120 13:47:23.236964 1958891 addons.go:247] addon storage-provisioner should already be in state true
	I0120 13:47:23.236974 1958891 addons.go:69] Setting default-storageclass=true in profile "test-preload-988350"
	I0120 13:47:23.237001 1958891 host.go:66] Checking if "test-preload-988350" exists ...
	I0120 13:47:23.237005 1958891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-988350"
	I0120 13:47:23.237022 1958891 config.go:182] Loaded profile config "test-preload-988350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0120 13:47:23.237433 1958891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:47:23.237524 1958891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:47:23.237555 1958891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:47:23.237610 1958891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:47:23.239244 1958891 out.go:177] * Verifying Kubernetes components...
	I0120 13:47:23.240760 1958891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:47:23.254127 1958891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41629
	I0120 13:47:23.254290 1958891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34083
	I0120 13:47:23.254801 1958891 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:47:23.254802 1958891 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:47:23.255318 1958891 main.go:141] libmachine: Using API Version  1
	I0120 13:47:23.255344 1958891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:47:23.255460 1958891 main.go:141] libmachine: Using API Version  1
	I0120 13:47:23.255490 1958891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:47:23.255714 1958891 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:47:23.255843 1958891 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:47:23.256015 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetState
	I0120 13:47:23.256392 1958891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:47:23.256465 1958891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:47:23.258736 1958891 kapi.go:59] client config for test-preload-988350: &rest.Config{Host:"https://192.168.39.87:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/client.crt", KeyFile:"/home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/test-preload-988350/client.key", CAFile:"/home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243bda0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0120 13:47:23.259029 1958891 addons.go:238] Setting addon default-storageclass=true in "test-preload-988350"
	W0120 13:47:23.259045 1958891 addons.go:247] addon default-storageclass should already be in state true
	I0120 13:47:23.259069 1958891 host.go:66] Checking if "test-preload-988350" exists ...
	I0120 13:47:23.259335 1958891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:47:23.259391 1958891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:47:23.272831 1958891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41513
	I0120 13:47:23.273365 1958891 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:47:23.273968 1958891 main.go:141] libmachine: Using API Version  1
	I0120 13:47:23.273998 1958891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:47:23.274358 1958891 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:47:23.274559 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetState
	I0120 13:47:23.275300 1958891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42709
	I0120 13:47:23.275807 1958891 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:47:23.276345 1958891 main.go:141] libmachine: Using API Version  1
	I0120 13:47:23.276375 1958891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:47:23.276691 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:47:23.276814 1958891 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:47:23.277453 1958891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:47:23.277509 1958891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:47:23.278730 1958891 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:47:23.280067 1958891 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 13:47:23.280084 1958891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 13:47:23.280104 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:47:23.283549 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:47:23.284006 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:47:23.284036 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:47:23.284237 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:47:23.284459 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:47:23.284616 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:47:23.284756 1958891 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa Username:docker}
	I0120 13:47:23.319448 1958891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41997
	I0120 13:47:23.319906 1958891 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:47:23.320483 1958891 main.go:141] libmachine: Using API Version  1
	I0120 13:47:23.320507 1958891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:47:23.320902 1958891 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:47:23.321143 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetState
	I0120 13:47:23.322894 1958891 main.go:141] libmachine: (test-preload-988350) Calling .DriverName
	I0120 13:47:23.323181 1958891 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 13:47:23.323204 1958891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 13:47:23.323228 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHHostname
	I0120 13:47:23.326243 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:47:23.326680 1958891 main.go:141] libmachine: (test-preload-988350) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:26:e1", ip: ""} in network mk-test-preload-988350: {Iface:virbr1 ExpiryTime:2025-01-20 14:46:50 +0000 UTC Type:0 Mac:52:54:00:a0:26:e1 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:test-preload-988350 Clientid:01:52:54:00:a0:26:e1}
	I0120 13:47:23.326722 1958891 main.go:141] libmachine: (test-preload-988350) DBG | domain test-preload-988350 has defined IP address 192.168.39.87 and MAC address 52:54:00:a0:26:e1 in network mk-test-preload-988350
	I0120 13:47:23.326911 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHPort
	I0120 13:47:23.327109 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHKeyPath
	I0120 13:47:23.327281 1958891 main.go:141] libmachine: (test-preload-988350) Calling .GetSSHUsername
	I0120 13:47:23.327441 1958891 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/test-preload-988350/id_rsa Username:docker}
	I0120 13:47:23.425592 1958891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 13:47:23.442821 1958891 node_ready.go:35] waiting up to 6m0s for node "test-preload-988350" to be "Ready" ...
	I0120 13:47:23.528220 1958891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 13:47:23.543926 1958891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 13:47:24.535629 1958891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007357062s)
	I0120 13:47:24.535695 1958891 main.go:141] libmachine: Making call to close driver server
	I0120 13:47:24.535709 1958891 main.go:141] libmachine: (test-preload-988350) Calling .Close
	I0120 13:47:24.536026 1958891 main.go:141] libmachine: Successfully made call to close driver server
	I0120 13:47:24.536038 1958891 main.go:141] libmachine: (test-preload-988350) DBG | Closing plugin on server side
	I0120 13:47:24.536054 1958891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 13:47:24.536067 1958891 main.go:141] libmachine: Making call to close driver server
	I0120 13:47:24.536075 1958891 main.go:141] libmachine: (test-preload-988350) Calling .Close
	I0120 13:47:24.536361 1958891 main.go:141] libmachine: Successfully made call to close driver server
	I0120 13:47:24.536392 1958891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 13:47:24.536414 1958891 main.go:141] libmachine: (test-preload-988350) DBG | Closing plugin on server side
	I0120 13:47:24.545151 1958891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.001176225s)
	I0120 13:47:24.545202 1958891 main.go:141] libmachine: Making call to close driver server
	I0120 13:47:24.545216 1958891 main.go:141] libmachine: (test-preload-988350) Calling .Close
	I0120 13:47:24.545510 1958891 main.go:141] libmachine: (test-preload-988350) DBG | Closing plugin on server side
	I0120 13:47:24.545564 1958891 main.go:141] libmachine: Successfully made call to close driver server
	I0120 13:47:24.545576 1958891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 13:47:24.545589 1958891 main.go:141] libmachine: Making call to close driver server
	I0120 13:47:24.545600 1958891 main.go:141] libmachine: (test-preload-988350) Calling .Close
	I0120 13:47:24.545846 1958891 main.go:141] libmachine: Successfully made call to close driver server
	I0120 13:47:24.545864 1958891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 13:47:24.545870 1958891 main.go:141] libmachine: (test-preload-988350) DBG | Closing plugin on server side
	I0120 13:47:24.553462 1958891 main.go:141] libmachine: Making call to close driver server
	I0120 13:47:24.553483 1958891 main.go:141] libmachine: (test-preload-988350) Calling .Close
	I0120 13:47:24.553747 1958891 main.go:141] libmachine: Successfully made call to close driver server
	I0120 13:47:24.553762 1958891 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 13:47:24.553771 1958891 main.go:141] libmachine: (test-preload-988350) DBG | Closing plugin on server side
	I0120 13:47:24.556248 1958891 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 13:47:24.557416 1958891 addons.go:514] duration metric: took 1.320605845s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 13:47:25.446215 1958891 node_ready.go:53] node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:27.447093 1958891 node_ready.go:53] node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:29.946997 1958891 node_ready.go:53] node "test-preload-988350" has status "Ready":"False"
	I0120 13:47:30.947929 1958891 node_ready.go:49] node "test-preload-988350" has status "Ready":"True"
	I0120 13:47:30.947956 1958891 node_ready.go:38] duration metric: took 7.505094131s for node "test-preload-988350" to be "Ready" ...
	I0120 13:47:30.947968 1958891 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 13:47:30.954929 1958891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-z8zf7" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:30.962525 1958891 pod_ready.go:93] pod "coredns-6d4b75cb6d-z8zf7" in "kube-system" namespace has status "Ready":"True"
	I0120 13:47:30.962549 1958891 pod_ready.go:82] duration metric: took 7.592915ms for pod "coredns-6d4b75cb6d-z8zf7" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:30.962559 1958891 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:30.970028 1958891 pod_ready.go:93] pod "etcd-test-preload-988350" in "kube-system" namespace has status "Ready":"True"
	I0120 13:47:30.970053 1958891 pod_ready.go:82] duration metric: took 7.48685ms for pod "etcd-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:30.970065 1958891 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:30.976160 1958891 pod_ready.go:93] pod "kube-apiserver-test-preload-988350" in "kube-system" namespace has status "Ready":"True"
	I0120 13:47:30.976185 1958891 pod_ready.go:82] duration metric: took 6.106378ms for pod "kube-apiserver-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:30.976194 1958891 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:30.981840 1958891 pod_ready.go:93] pod "kube-controller-manager-test-preload-988350" in "kube-system" namespace has status "Ready":"True"
	I0120 13:47:30.981862 1958891 pod_ready.go:82] duration metric: took 5.66205ms for pod "kube-controller-manager-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:30.981872 1958891 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ngvnk" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:31.346572 1958891 pod_ready.go:93] pod "kube-proxy-ngvnk" in "kube-system" namespace has status "Ready":"True"
	I0120 13:47:31.346599 1958891 pod_ready.go:82] duration metric: took 364.719304ms for pod "kube-proxy-ngvnk" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:31.346631 1958891 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:31.747071 1958891 pod_ready.go:93] pod "kube-scheduler-test-preload-988350" in "kube-system" namespace has status "Ready":"True"
	I0120 13:47:31.747106 1958891 pod_ready.go:82] duration metric: took 400.464437ms for pod "kube-scheduler-test-preload-988350" in "kube-system" namespace to be "Ready" ...
	I0120 13:47:31.747120 1958891 pod_ready.go:39] duration metric: took 799.136715ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 13:47:31.747141 1958891 api_server.go:52] waiting for apiserver process to appear ...
	I0120 13:47:31.747226 1958891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:47:31.763693 1958891 api_server.go:72] duration metric: took 8.526838235s to wait for apiserver process to appear ...
	I0120 13:47:31.763742 1958891 api_server.go:88] waiting for apiserver healthz status ...
	I0120 13:47:31.763785 1958891 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0120 13:47:31.768907 1958891 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I0120 13:47:31.769783 1958891 api_server.go:141] control plane version: v1.24.4
	I0120 13:47:31.769819 1958891 api_server.go:131] duration metric: took 6.057962ms to wait for apiserver health ...
	I0120 13:47:31.769831 1958891 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 13:47:31.950876 1958891 system_pods.go:59] 7 kube-system pods found
	I0120 13:47:31.950916 1958891 system_pods.go:61] "coredns-6d4b75cb6d-z8zf7" [ed1e691a-b489-4aa6-86bd-b33c763327b1] Running
	I0120 13:47:31.950924 1958891 system_pods.go:61] "etcd-test-preload-988350" [b8fb701d-d347-4d18-864a-42631476e07c] Running
	I0120 13:47:31.950929 1958891 system_pods.go:61] "kube-apiserver-test-preload-988350" [e53662ef-dbb0-43eb-a3cd-e34b93ec2e84] Running
	I0120 13:47:31.950934 1958891 system_pods.go:61] "kube-controller-manager-test-preload-988350" [7c2b9384-405a-49ff-a45c-91762d57c5f2] Running
	I0120 13:47:31.950939 1958891 system_pods.go:61] "kube-proxy-ngvnk" [f943b93d-6df2-408a-8192-471496875019] Running
	I0120 13:47:31.950942 1958891 system_pods.go:61] "kube-scheduler-test-preload-988350" [9d6aedb0-e187-41b3-96d5-b3c4277e3987] Running
	I0120 13:47:31.950945 1958891 system_pods.go:61] "storage-provisioner" [2d1d9ca0-e2c6-4b87-9c01-d72721f1262d] Running
	I0120 13:47:31.950954 1958891 system_pods.go:74] duration metric: took 181.114578ms to wait for pod list to return data ...
	I0120 13:47:31.950964 1958891 default_sa.go:34] waiting for default service account to be created ...
	I0120 13:47:32.146435 1958891 default_sa.go:45] found service account: "default"
	I0120 13:47:32.146477 1958891 default_sa.go:55] duration metric: took 195.504548ms for default service account to be created ...
	I0120 13:47:32.146490 1958891 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 13:47:32.350012 1958891 system_pods.go:87] 7 kube-system pods found
	I0120 13:47:32.547707 1958891 system_pods.go:105] "coredns-6d4b75cb6d-z8zf7" [ed1e691a-b489-4aa6-86bd-b33c763327b1] Running
	I0120 13:47:32.547732 1958891 system_pods.go:105] "etcd-test-preload-988350" [b8fb701d-d347-4d18-864a-42631476e07c] Running
	I0120 13:47:32.547738 1958891 system_pods.go:105] "kube-apiserver-test-preload-988350" [e53662ef-dbb0-43eb-a3cd-e34b93ec2e84] Running
	I0120 13:47:32.547743 1958891 system_pods.go:105] "kube-controller-manager-test-preload-988350" [7c2b9384-405a-49ff-a45c-91762d57c5f2] Running
	I0120 13:47:32.547747 1958891 system_pods.go:105] "kube-proxy-ngvnk" [f943b93d-6df2-408a-8192-471496875019] Running
	I0120 13:47:32.547751 1958891 system_pods.go:105] "kube-scheduler-test-preload-988350" [9d6aedb0-e187-41b3-96d5-b3c4277e3987] Running
	I0120 13:47:32.547755 1958891 system_pods.go:105] "storage-provisioner" [2d1d9ca0-e2c6-4b87-9c01-d72721f1262d] Running
	I0120 13:47:32.547762 1958891 system_pods.go:147] duration metric: took 401.265611ms to wait for k8s-apps to be running ...
	I0120 13:47:32.547770 1958891 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 13:47:32.547827 1958891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:47:32.565714 1958891 system_svc.go:56] duration metric: took 17.931122ms WaitForService to wait for kubelet
	I0120 13:47:32.565750 1958891 kubeadm.go:582] duration metric: took 9.328963231s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 13:47:32.565769 1958891 node_conditions.go:102] verifying NodePressure condition ...
	I0120 13:47:32.748387 1958891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 13:47:32.748416 1958891 node_conditions.go:123] node cpu capacity is 2
	I0120 13:47:32.748427 1958891 node_conditions.go:105] duration metric: took 182.653233ms to run NodePressure ...
	I0120 13:47:32.748437 1958891 start.go:241] waiting for startup goroutines ...
	I0120 13:47:32.748444 1958891 start.go:246] waiting for cluster config update ...
	I0120 13:47:32.748454 1958891 start.go:255] writing updated cluster config ...
	I0120 13:47:32.748742 1958891 ssh_runner.go:195] Run: rm -f paused
	I0120 13:47:32.799335 1958891 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0120 13:47:32.801294 1958891 out.go:201] 
	W0120 13:47:32.802599 1958891 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0120 13:47:32.803842 1958891 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0120 13:47:32.805146 1958891 out.go:177] * Done! kubectl is now configured to use "test-preload-988350" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.769802970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737380853769780101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=528c77b0-5cc1-4a0a-8654-73132cbd3873 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.770450698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab406245-1750-4335-bd31-328978592506 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.770567874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab406245-1750-4335-bd31-328978592506 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.770741736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f942c2209464bbc418e456c1a21f9f071e8f5539ab7782a80e6d31e65628c928,PodSandboxId:4f55a16fcccd027f1ce1aafa2cbdaf6503367b2a93f0d608a03a24e46d66da4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737380849665409814,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-z8zf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed1e691a-b489-4aa6-86bd-b33c763327b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab1ee18f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfafaea1f4a36004151dcd3c15c67d4aebac1c524e91d5cde2f2ea816cf91b2a,PodSandboxId:68a1dc452230340b08b1f9bdfd0dc834f1e18a050483fd5ad6a411c5b1ca6329,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737380842419178225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2d1d9ca0-e2c6-4b87-9c01-d72721f1262d,},Annotations:map[string]string{io.kubernetes.container.hash: ae069762,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ba3b8f63fd2566db53107be5ebd0ddfedc3596d5e2736ac54c108cf4a1c584,PodSandboxId:0e45596a4885696bf38673c80833102649ed222e0e8751c063f633b1cf455e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737380842370399221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngvnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9
43b93d-6df2-408a-8192-471496875019,},Annotations:map[string]string{io.kubernetes.container.hash: 5711140f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbabde57e33ff0804c617a4a35c468bd4084ae553fc37797a3ba03ade9e14dd1,PodSandboxId:223a5b7e4bbb922c9af0097c7c961711d61269479e2e995743ab8201a3046463,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737380836213451121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ab2ac26
41b2cd24bfbb6d2ed535b9b,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc349008ca9ca99ac00c5ad068527b5744d546c8d0289638447d467b6d0d33b,PodSandboxId:024484cf1628f927a4e61f8e603d6d1a1b47fbc5ab9116198b848762c3807415,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737380836144295778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 859909ce0fd6c01babfc870c13b40b08,},Annotations:map
[string]string{io.kubernetes.container.hash: a1214f6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8a4c178c279e1ba8946e08f07c226e703bfccb61edaec3c07ef50005b60edd,PodSandboxId:45fcafe064a1441a6948afdd00bf26b8ad0b3f25102b9e303c2958743f9188d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737380836115569331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad47eae1cf829d038c00315c24c89e3b,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13fa919ddeab8d45bf95b8419f7bf1b932c2d9d107c19fda4e5ed6074c136f3,PodSandboxId:aab527d9bd5475b37ac04b8882c6cc82d6483cb60b9f802a5657941bce9c5b8c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737380836040825746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3410c32ce6611794c30f2b77041f54f7,},Annotation
s:map[string]string{io.kubernetes.container.hash: dee1cff7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab406245-1750-4335-bd31-328978592506 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.812308973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5cf17ff-8907-4993-b2f2-c1e093a37f59 name=/runtime.v1.RuntimeService/Version
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.812400608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5cf17ff-8907-4993-b2f2-c1e093a37f59 name=/runtime.v1.RuntimeService/Version
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.814049763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2af98a56-61b2-4290-9499-f1bb345c1bff name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.814612857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737380853814589929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2af98a56-61b2-4290-9499-f1bb345c1bff name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.815146670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7cdb162-97f0-4dd4-9127-65f73ce21853 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.815197904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7cdb162-97f0-4dd4-9127-65f73ce21853 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.815355418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f942c2209464bbc418e456c1a21f9f071e8f5539ab7782a80e6d31e65628c928,PodSandboxId:4f55a16fcccd027f1ce1aafa2cbdaf6503367b2a93f0d608a03a24e46d66da4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737380849665409814,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-z8zf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed1e691a-b489-4aa6-86bd-b33c763327b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab1ee18f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfafaea1f4a36004151dcd3c15c67d4aebac1c524e91d5cde2f2ea816cf91b2a,PodSandboxId:68a1dc452230340b08b1f9bdfd0dc834f1e18a050483fd5ad6a411c5b1ca6329,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737380842419178225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2d1d9ca0-e2c6-4b87-9c01-d72721f1262d,},Annotations:map[string]string{io.kubernetes.container.hash: ae069762,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ba3b8f63fd2566db53107be5ebd0ddfedc3596d5e2736ac54c108cf4a1c584,PodSandboxId:0e45596a4885696bf38673c80833102649ed222e0e8751c063f633b1cf455e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737380842370399221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngvnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9
43b93d-6df2-408a-8192-471496875019,},Annotations:map[string]string{io.kubernetes.container.hash: 5711140f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbabde57e33ff0804c617a4a35c468bd4084ae553fc37797a3ba03ade9e14dd1,PodSandboxId:223a5b7e4bbb922c9af0097c7c961711d61269479e2e995743ab8201a3046463,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737380836213451121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ab2ac26
41b2cd24bfbb6d2ed535b9b,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc349008ca9ca99ac00c5ad068527b5744d546c8d0289638447d467b6d0d33b,PodSandboxId:024484cf1628f927a4e61f8e603d6d1a1b47fbc5ab9116198b848762c3807415,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737380836144295778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 859909ce0fd6c01babfc870c13b40b08,},Annotations:map
[string]string{io.kubernetes.container.hash: a1214f6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8a4c178c279e1ba8946e08f07c226e703bfccb61edaec3c07ef50005b60edd,PodSandboxId:45fcafe064a1441a6948afdd00bf26b8ad0b3f25102b9e303c2958743f9188d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737380836115569331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad47eae1cf829d038c00315c24c89e3b,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13fa919ddeab8d45bf95b8419f7bf1b932c2d9d107c19fda4e5ed6074c136f3,PodSandboxId:aab527d9bd5475b37ac04b8882c6cc82d6483cb60b9f802a5657941bce9c5b8c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737380836040825746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3410c32ce6611794c30f2b77041f54f7,},Annotation
s:map[string]string{io.kubernetes.container.hash: dee1cff7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7cdb162-97f0-4dd4-9127-65f73ce21853 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.852253242Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11be03d2-9190-408f-897d-d765bfb713e5 name=/runtime.v1.RuntimeService/Version
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.852324862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11be03d2-9190-408f-897d-d765bfb713e5 name=/runtime.v1.RuntimeService/Version
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.853182525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab97dd13-f427-4b0a-ad96-5cdb5e245bbb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.853823786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737380853853799620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab97dd13-f427-4b0a-ad96-5cdb5e245bbb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.854321925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54cf5304-b200-4d68-b1da-5bd13b36023e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.854371172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54cf5304-b200-4d68-b1da-5bd13b36023e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.854602841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f942c2209464bbc418e456c1a21f9f071e8f5539ab7782a80e6d31e65628c928,PodSandboxId:4f55a16fcccd027f1ce1aafa2cbdaf6503367b2a93f0d608a03a24e46d66da4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737380849665409814,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-z8zf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed1e691a-b489-4aa6-86bd-b33c763327b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab1ee18f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfafaea1f4a36004151dcd3c15c67d4aebac1c524e91d5cde2f2ea816cf91b2a,PodSandboxId:68a1dc452230340b08b1f9bdfd0dc834f1e18a050483fd5ad6a411c5b1ca6329,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737380842419178225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2d1d9ca0-e2c6-4b87-9c01-d72721f1262d,},Annotations:map[string]string{io.kubernetes.container.hash: ae069762,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ba3b8f63fd2566db53107be5ebd0ddfedc3596d5e2736ac54c108cf4a1c584,PodSandboxId:0e45596a4885696bf38673c80833102649ed222e0e8751c063f633b1cf455e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737380842370399221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngvnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9
43b93d-6df2-408a-8192-471496875019,},Annotations:map[string]string{io.kubernetes.container.hash: 5711140f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbabde57e33ff0804c617a4a35c468bd4084ae553fc37797a3ba03ade9e14dd1,PodSandboxId:223a5b7e4bbb922c9af0097c7c961711d61269479e2e995743ab8201a3046463,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737380836213451121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ab2ac26
41b2cd24bfbb6d2ed535b9b,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc349008ca9ca99ac00c5ad068527b5744d546c8d0289638447d467b6d0d33b,PodSandboxId:024484cf1628f927a4e61f8e603d6d1a1b47fbc5ab9116198b848762c3807415,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737380836144295778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 859909ce0fd6c01babfc870c13b40b08,},Annotations:map
[string]string{io.kubernetes.container.hash: a1214f6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8a4c178c279e1ba8946e08f07c226e703bfccb61edaec3c07ef50005b60edd,PodSandboxId:45fcafe064a1441a6948afdd00bf26b8ad0b3f25102b9e303c2958743f9188d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737380836115569331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad47eae1cf829d038c00315c24c89e3b,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13fa919ddeab8d45bf95b8419f7bf1b932c2d9d107c19fda4e5ed6074c136f3,PodSandboxId:aab527d9bd5475b37ac04b8882c6cc82d6483cb60b9f802a5657941bce9c5b8c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737380836040825746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3410c32ce6611794c30f2b77041f54f7,},Annotation
s:map[string]string{io.kubernetes.container.hash: dee1cff7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54cf5304-b200-4d68-b1da-5bd13b36023e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.888245516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff29abd2-4611-47b6-ba17-697d054915c4 name=/runtime.v1.RuntimeService/Version
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.888315887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff29abd2-4611-47b6-ba17-697d054915c4 name=/runtime.v1.RuntimeService/Version
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.889333332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29cbf030-7093-4a7f-9b2d-b032ef244485 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.889880446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737380853889858302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29cbf030-7093-4a7f-9b2d-b032ef244485 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.890439108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d016289-6f2f-4079-86e0-8f3b35cde572 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.890554614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d016289-6f2f-4079-86e0-8f3b35cde572 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 13:47:33 test-preload-988350 crio[669]: time="2025-01-20 13:47:33.890725544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f942c2209464bbc418e456c1a21f9f071e8f5539ab7782a80e6d31e65628c928,PodSandboxId:4f55a16fcccd027f1ce1aafa2cbdaf6503367b2a93f0d608a03a24e46d66da4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1737380849665409814,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-z8zf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed1e691a-b489-4aa6-86bd-b33c763327b1,},Annotations:map[string]string{io.kubernetes.container.hash: ab1ee18f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfafaea1f4a36004151dcd3c15c67d4aebac1c524e91d5cde2f2ea816cf91b2a,PodSandboxId:68a1dc452230340b08b1f9bdfd0dc834f1e18a050483fd5ad6a411c5b1ca6329,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737380842419178225,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2d1d9ca0-e2c6-4b87-9c01-d72721f1262d,},Annotations:map[string]string{io.kubernetes.container.hash: ae069762,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4ba3b8f63fd2566db53107be5ebd0ddfedc3596d5e2736ac54c108cf4a1c584,PodSandboxId:0e45596a4885696bf38673c80833102649ed222e0e8751c063f633b1cf455e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1737380842370399221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngvnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9
43b93d-6df2-408a-8192-471496875019,},Annotations:map[string]string{io.kubernetes.container.hash: 5711140f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbabde57e33ff0804c617a4a35c468bd4084ae553fc37797a3ba03ade9e14dd1,PodSandboxId:223a5b7e4bbb922c9af0097c7c961711d61269479e2e995743ab8201a3046463,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1737380836213451121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ab2ac26
41b2cd24bfbb6d2ed535b9b,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bc349008ca9ca99ac00c5ad068527b5744d546c8d0289638447d467b6d0d33b,PodSandboxId:024484cf1628f927a4e61f8e603d6d1a1b47fbc5ab9116198b848762c3807415,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1737380836144295778,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 859909ce0fd6c01babfc870c13b40b08,},Annotations:map
[string]string{io.kubernetes.container.hash: a1214f6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f8a4c178c279e1ba8946e08f07c226e703bfccb61edaec3c07ef50005b60edd,PodSandboxId:45fcafe064a1441a6948afdd00bf26b8ad0b3f25102b9e303c2958743f9188d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1737380836115569331,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad47eae1cf829d038c00315c24c89e3b,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b13fa919ddeab8d45bf95b8419f7bf1b932c2d9d107c19fda4e5ed6074c136f3,PodSandboxId:aab527d9bd5475b37ac04b8882c6cc82d6483cb60b9f802a5657941bce9c5b8c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1737380836040825746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-988350,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3410c32ce6611794c30f2b77041f54f7,},Annotation
s:map[string]string{io.kubernetes.container.hash: dee1cff7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d016289-6f2f-4079-86e0-8f3b35cde572 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f942c2209464b       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   4f55a16fcccd0       coredns-6d4b75cb6d-z8zf7
	bfafaea1f4a36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       1                   68a1dc4522303       storage-provisioner
	b4ba3b8f63fd2       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   11 seconds ago      Running             kube-proxy                1                   0e45596a48856       kube-proxy-ngvnk
	bbabde57e33ff       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   223a5b7e4bbb9       kube-scheduler-test-preload-988350
	3bc349008ca9c       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   024484cf1628f       etcd-test-preload-988350
	1f8a4c178c279       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   45fcafe064a14       kube-controller-manager-test-preload-988350
	b13fa919ddeab       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   aab527d9bd547       kube-apiserver-test-preload-988350
	
	
	==> coredns [f942c2209464bbc418e456c1a21f9f071e8f5539ab7782a80e6d31e65628c928] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:35061 - 20085 "HINFO IN 8297231305145905270.272614613649573583. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.029551329s
	
	
	==> describe nodes <==
	Name:               test-preload-988350
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-988350
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3
	                    minikube.k8s.io/name=test-preload-988350
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T13_46_03_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 13:46:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-988350
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 13:47:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 13:47:30 +0000   Mon, 20 Jan 2025 13:45:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 13:47:30 +0000   Mon, 20 Jan 2025 13:45:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 13:47:30 +0000   Mon, 20 Jan 2025 13:45:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 13:47:30 +0000   Mon, 20 Jan 2025 13:47:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    test-preload-988350
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a58ad9b006943028963f304b67a73c7
	  System UUID:                0a58ad9b-0069-4302-8963-f304b67a73c7
	  Boot ID:                    4872ba50-ee3b-4143-91bc-2b773b950ba0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-z8zf7                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-test-preload-988350                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         90s
	  kube-system                 kube-apiserver-test-preload-988350             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-test-preload-988350    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-ngvnk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-test-preload-988350             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s (x5 over 99s)  kubelet          Node test-preload-988350 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     99s (x4 over 99s)  kubelet          Node test-preload-988350 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    99s (x4 over 99s)  kubelet          Node test-preload-988350 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s                kubelet          Node test-preload-988350 status is now: NodeHasSufficientPID
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node test-preload-988350 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node test-preload-988350 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                81s                kubelet          Node test-preload-988350 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node test-preload-988350 event: Registered Node test-preload-988350 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-988350 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-988350 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-988350 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-988350 event: Registered Node test-preload-988350 in Controller
	
	
	==> dmesg <==
	[Jan20 13:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053327] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042335] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.989191] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.892806] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.656175] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.144680] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.063022] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055373] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.170060] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.129495] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.275147] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[Jan20 13:47] systemd-fstab-generator[990]: Ignoring "noauto" option for root device
	[  +0.062144] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.830196] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +6.200519] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.951815] systemd-fstab-generator[1767]: Ignoring "noauto" option for root device
	[  +6.103030] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [3bc349008ca9ca99ac00c5ad068527b5744d546c8d0289638447d467b6d0d33b] <==
	{"level":"info","ts":"2025-01-20T13:47:16.516Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"aad771494ea7416a","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-01-20T13:47:16.530Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-01-20T13:47:16.535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a switched to configuration voters=(12310432666106675562)"}
	{"level":"info","ts":"2025-01-20T13:47:16.536Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","added-peer-id":"aad771494ea7416a","added-peer-peer-urls":["https://192.168.39.87:2380"]}
	{"level":"info","ts":"2025-01-20T13:47:16.537Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T13:47:16.537Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T13:47:16.545Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-20T13:47:16.545Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aad771494ea7416a","initial-advertise-peer-urls":["https://192.168.39.87:2380"],"listen-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.87:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-20T13:47:16.546Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-20T13:47:16.546Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2025-01-20T13:47:16.548Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2025-01-20T13:47:17.785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-20T13:47:17.785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-20T13:47:17.785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgPreVoteResp from aad771494ea7416a at term 2"}
	{"level":"info","ts":"2025-01-20T13:47:17.785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became candidate at term 3"}
	{"level":"info","ts":"2025-01-20T13:47:17.785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgVoteResp from aad771494ea7416a at term 3"}
	{"level":"info","ts":"2025-01-20T13:47:17.785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became leader at term 3"}
	{"level":"info","ts":"2025-01-20T13:47:17.785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aad771494ea7416a elected leader aad771494ea7416a at term 3"}
	{"level":"info","ts":"2025-01-20T13:47:17.786Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aad771494ea7416a","local-member-attributes":"{Name:test-preload-988350 ClientURLs:[https://192.168.39.87:2379]}","request-path":"/0/members/aad771494ea7416a/attributes","cluster-id":"8794d44e1d88e05d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-20T13:47:17.786Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T13:47:17.788Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.87:2379"}
	{"level":"info","ts":"2025-01-20T13:47:17.788Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-20T13:47:17.789Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-20T13:47:17.789Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-20T13:47:17.789Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:47:34 up 0 min,  0 users,  load average: 0.97, 0.28, 0.09
	Linux test-preload-988350 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b13fa919ddeab8d45bf95b8419f7bf1b932c2d9d107c19fda4e5ed6074c136f3] <==
	I0120 13:47:20.339951       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0120 13:47:20.339970       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0120 13:47:20.343174       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0120 13:47:20.367461       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0120 13:47:20.343287       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0120 13:47:20.343299       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0120 13:47:20.444706       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0120 13:47:20.449540       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0120 13:47:20.461418       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0120 13:47:20.467617       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0120 13:47:20.498803       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0120 13:47:20.502884       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0120 13:47:20.503065       1 cache.go:39] Caches are synced for autoregister controller
	I0120 13:47:20.503454       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0120 13:47:20.534397       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0120 13:47:20.992963       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0120 13:47:21.312567       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0120 13:47:22.087674       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0120 13:47:22.102589       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0120 13:47:22.147857       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0120 13:47:22.173996       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0120 13:47:22.188336       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0120 13:47:22.760644       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0120 13:47:33.479424       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0120 13:47:33.529446       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [1f8a4c178c279e1ba8946e08f07c226e703bfccb61edaec3c07ef50005b60edd] <==
	I0120 13:47:33.322609       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0120 13:47:33.323050       1 shared_informer.go:262] Caches are synced for crt configmap
	I0120 13:47:33.323165       1 shared_informer.go:262] Caches are synced for stateful set
	I0120 13:47:33.324576       1 shared_informer.go:262] Caches are synced for TTL
	I0120 13:47:33.328218       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0120 13:47:33.329538       1 shared_informer.go:262] Caches are synced for endpoint
	I0120 13:47:33.331193       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0120 13:47:33.334066       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0120 13:47:33.341204       1 shared_informer.go:262] Caches are synced for PVC protection
	I0120 13:47:33.347284       1 shared_informer.go:262] Caches are synced for daemon sets
	I0120 13:47:33.372932       1 shared_informer.go:262] Caches are synced for taint
	I0120 13:47:33.373128       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0120 13:47:33.373269       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0120 13:47:33.373461       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-988350. Assuming now as a timestamp.
	I0120 13:47:33.373629       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0120 13:47:33.373743       1 event.go:294] "Event occurred" object="test-preload-988350" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-988350 event: Registered Node test-preload-988350 in Controller"
	I0120 13:47:33.428878       1 shared_informer.go:262] Caches are synced for persistent volume
	I0120 13:47:33.433894       1 shared_informer.go:262] Caches are synced for resource quota
	I0120 13:47:33.460188       1 shared_informer.go:262] Caches are synced for resource quota
	I0120 13:47:33.470410       1 shared_informer.go:262] Caches are synced for expand
	I0120 13:47:33.509447       1 shared_informer.go:262] Caches are synced for attach detach
	I0120 13:47:33.525848       1 shared_informer.go:262] Caches are synced for PV protection
	I0120 13:47:33.986911       1 shared_informer.go:262] Caches are synced for garbage collector
	I0120 13:47:33.986944       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0120 13:47:34.012141       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [b4ba3b8f63fd2566db53107be5ebd0ddfedc3596d5e2736ac54c108cf4a1c584] <==
	I0120 13:47:22.711419       1 node.go:163] Successfully retrieved node IP: 192.168.39.87
	I0120 13:47:22.711690       1 server_others.go:138] "Detected node IP" address="192.168.39.87"
	I0120 13:47:22.711763       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0120 13:47:22.745736       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0120 13:47:22.745814       1 server_others.go:206] "Using iptables Proxier"
	I0120 13:47:22.746148       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0120 13:47:22.747190       1 server.go:661] "Version info" version="v1.24.4"
	I0120 13:47:22.747238       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 13:47:22.748933       1 config.go:317] "Starting service config controller"
	I0120 13:47:22.749194       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0120 13:47:22.749243       1 config.go:226] "Starting endpoint slice config controller"
	I0120 13:47:22.749260       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0120 13:47:22.750346       1 config.go:444] "Starting node config controller"
	I0120 13:47:22.750385       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0120 13:47:22.849627       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0120 13:47:22.849661       1 shared_informer.go:262] Caches are synced for service config
	I0120 13:47:22.851149       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [bbabde57e33ff0804c617a4a35c468bd4084ae553fc37797a3ba03ade9e14dd1] <==
	I0120 13:47:17.688731       1 serving.go:348] Generated self-signed cert in-memory
	W0120 13:47:20.369249       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0120 13:47:20.371550       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0120 13:47:20.373559       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0120 13:47:20.373654       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0120 13:47:20.453143       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0120 13:47:20.453363       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 13:47:20.461252       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0120 13:47:20.461952       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0120 13:47:20.462071       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0120 13:47:20.463520       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0120 13:47:20.562732       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 13:47:20 test-preload-988350 kubelet[1127]: I0120 13:47:20.486574    1127 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-988350"
	Jan 20 13:47:20 test-preload-988350 kubelet[1127]: I0120 13:47:20.491995    1127 setters.go:532] "Node became not ready" node="test-preload-988350" condition={Type:Ready Status:False LastHeartbeatTime:2025-01-20 13:47:20.491940322 +0000 UTC m=+5.301132845 LastTransitionTime:2025-01-20 13:47:20.491940322 +0000 UTC m=+5.301132845 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.351771    1127 apiserver.go:52] "Watching apiserver"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.356689    1127 topology_manager.go:200] "Topology Admit Handler"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.356797    1127 topology_manager.go:200] "Topology Admit Handler"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.356843    1127 topology_manager.go:200] "Topology Admit Handler"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: E0120 13:47:21.359272    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-z8zf7" podUID=ed1e691a-b489-4aa6-86bd-b33c763327b1
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.404397    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr8dg\" (UniqueName: \"kubernetes.io/projected/ed1e691a-b489-4aa6-86bd-b33c763327b1-kube-api-access-gr8dg\") pod \"coredns-6d4b75cb6d-z8zf7\" (UID: \"ed1e691a-b489-4aa6-86bd-b33c763327b1\") " pod="kube-system/coredns-6d4b75cb6d-z8zf7"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.404561    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f943b93d-6df2-408a-8192-471496875019-xtables-lock\") pod \"kube-proxy-ngvnk\" (UID: \"f943b93d-6df2-408a-8192-471496875019\") " pod="kube-system/kube-proxy-ngvnk"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.404582    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed1e691a-b489-4aa6-86bd-b33c763327b1-config-volume\") pod \"coredns-6d4b75cb6d-z8zf7\" (UID: \"ed1e691a-b489-4aa6-86bd-b33c763327b1\") " pod="kube-system/coredns-6d4b75cb6d-z8zf7"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.404669    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f943b93d-6df2-408a-8192-471496875019-kube-proxy\") pod \"kube-proxy-ngvnk\" (UID: \"f943b93d-6df2-408a-8192-471496875019\") " pod="kube-system/kube-proxy-ngvnk"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.404688    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmxk7\" (UniqueName: \"kubernetes.io/projected/2d1d9ca0-e2c6-4b87-9c01-d72721f1262d-kube-api-access-qmxk7\") pod \"storage-provisioner\" (UID: \"2d1d9ca0-e2c6-4b87-9c01-d72721f1262d\") " pod="kube-system/storage-provisioner"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.404766    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f943b93d-6df2-408a-8192-471496875019-lib-modules\") pod \"kube-proxy-ngvnk\" (UID: \"f943b93d-6df2-408a-8192-471496875019\") " pod="kube-system/kube-proxy-ngvnk"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.404791    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhzhm\" (UniqueName: \"kubernetes.io/projected/f943b93d-6df2-408a-8192-471496875019-kube-api-access-jhzhm\") pod \"kube-proxy-ngvnk\" (UID: \"f943b93d-6df2-408a-8192-471496875019\") " pod="kube-system/kube-proxy-ngvnk"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.404872    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d1d9ca0-e2c6-4b87-9c01-d72721f1262d-tmp\") pod \"storage-provisioner\" (UID: \"2d1d9ca0-e2c6-4b87-9c01-d72721f1262d\") " pod="kube-system/storage-provisioner"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: I0120 13:47:21.404884    1127 reconciler.go:159] "Reconciler: start to sync state"
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: E0120 13:47:21.512416    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 13:47:21 test-preload-988350 kubelet[1127]: E0120 13:47:21.512886    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ed1e691a-b489-4aa6-86bd-b33c763327b1-config-volume podName:ed1e691a-b489-4aa6-86bd-b33c763327b1 nodeName:}" failed. No retries permitted until 2025-01-20 13:47:22.012819102 +0000 UTC m=+6.822011648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed1e691a-b489-4aa6-86bd-b33c763327b1-config-volume") pod "coredns-6d4b75cb6d-z8zf7" (UID: "ed1e691a-b489-4aa6-86bd-b33c763327b1") : object "kube-system"/"coredns" not registered
	Jan 20 13:47:22 test-preload-988350 kubelet[1127]: E0120 13:47:22.015732    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 13:47:22 test-preload-988350 kubelet[1127]: E0120 13:47:22.015818    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ed1e691a-b489-4aa6-86bd-b33c763327b1-config-volume podName:ed1e691a-b489-4aa6-86bd-b33c763327b1 nodeName:}" failed. No retries permitted until 2025-01-20 13:47:23.01580308 +0000 UTC m=+7.824995604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed1e691a-b489-4aa6-86bd-b33c763327b1-config-volume") pod "coredns-6d4b75cb6d-z8zf7" (UID: "ed1e691a-b489-4aa6-86bd-b33c763327b1") : object "kube-system"/"coredns" not registered
	Jan 20 13:47:23 test-preload-988350 kubelet[1127]: E0120 13:47:23.023450    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 13:47:23 test-preload-988350 kubelet[1127]: E0120 13:47:23.023601    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ed1e691a-b489-4aa6-86bd-b33c763327b1-config-volume podName:ed1e691a-b489-4aa6-86bd-b33c763327b1 nodeName:}" failed. No retries permitted until 2025-01-20 13:47:25.023585903 +0000 UTC m=+9.832778440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed1e691a-b489-4aa6-86bd-b33c763327b1-config-volume") pod "coredns-6d4b75cb6d-z8zf7" (UID: "ed1e691a-b489-4aa6-86bd-b33c763327b1") : object "kube-system"/"coredns" not registered
	Jan 20 13:47:23 test-preload-988350 kubelet[1127]: E0120 13:47:23.456654    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-z8zf7" podUID=ed1e691a-b489-4aa6-86bd-b33c763327b1
	Jan 20 13:47:25 test-preload-988350 kubelet[1127]: E0120 13:47:25.048901    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 20 13:47:25 test-preload-988350 kubelet[1127]: E0120 13:47:25.049035    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/ed1e691a-b489-4aa6-86bd-b33c763327b1-config-volume podName:ed1e691a-b489-4aa6-86bd-b33c763327b1 nodeName:}" failed. No retries permitted until 2025-01-20 13:47:29.049002859 +0000 UTC m=+13.858195384 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ed1e691a-b489-4aa6-86bd-b33c763327b1-config-volume") pod "coredns-6d4b75cb6d-z8zf7" (UID: "ed1e691a-b489-4aa6-86bd-b33c763327b1") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [bfafaea1f4a36004151dcd3c15c67d4aebac1c524e91d5cde2f2ea816cf91b2a] <==
	I0120 13:47:22.584279       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-988350 -n test-preload-988350
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-988350 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-988350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-988350
--- FAIL: TestPreload (165.39s)

                                                
                                    
TestKubernetesUpgrade (368.49s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m42.400856865s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-377526] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-377526" primary control-plane node in "kubernetes-upgrade-377526" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:52:17.154248 1964401 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:52:17.154406 1964401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:52:17.154423 1964401 out.go:358] Setting ErrFile to fd 2...
	I0120 13:52:17.154430 1964401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:52:17.154817 1964401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:52:17.155720 1964401 out.go:352] Setting JSON to false
	I0120 13:52:17.157192 1964401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20083,"bootTime":1737361054,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 13:52:17.157293 1964401 start.go:139] virtualization: kvm guest
	I0120 13:52:17.159999 1964401 out.go:177] * [kubernetes-upgrade-377526] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 13:52:17.161605 1964401 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:52:17.161606 1964401 notify.go:220] Checking for updates...
	I0120 13:52:17.164282 1964401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:52:17.165719 1964401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:52:17.167231 1964401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:52:17.168606 1964401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 13:52:17.169994 1964401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:52:17.171741 1964401 config.go:182] Loaded profile config "NoKubernetes-926915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0120 13:52:17.171923 1964401 config.go:182] Loaded profile config "pause-324820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:52:17.172026 1964401 config.go:182] Loaded profile config "running-upgrade-934502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 13:52:17.172136 1964401 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:52:17.208030 1964401 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 13:52:17.209470 1964401 start.go:297] selected driver: kvm2
	I0120 13:52:17.209503 1964401 start.go:901] validating driver "kvm2" against <nil>
	I0120 13:52:17.209520 1964401 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:52:17.210594 1964401 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:52:17.210770 1964401 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 13:52:17.227553 1964401 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 13:52:17.227622 1964401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 13:52:17.227993 1964401 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 13:52:17.228040 1964401 cni.go:84] Creating CNI manager for ""
	I0120 13:52:17.228101 1964401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:52:17.228120 1964401 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 13:52:17.228184 1964401 start.go:340] cluster config:
	{Name:kubernetes-upgrade-377526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:52:17.228311 1964401 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:52:17.230374 1964401 out.go:177] * Starting "kubernetes-upgrade-377526" primary control-plane node in "kubernetes-upgrade-377526" cluster
	I0120 13:52:17.231719 1964401 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 13:52:17.231780 1964401 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 13:52:17.231793 1964401 cache.go:56] Caching tarball of preloaded images
	I0120 13:52:17.231902 1964401 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 13:52:17.231918 1964401 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 13:52:17.232039 1964401 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/config.json ...
	I0120 13:52:17.232066 1964401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/config.json: {Name:mk87d7a3dd2d6c93c75e4a4c2a0d3adab756afb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:52:17.232249 1964401 start.go:360] acquireMachinesLock for kubernetes-upgrade-377526: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 13:52:23.324460 1964401 start.go:364] duration metric: took 6.092163426s to acquireMachinesLock for "kubernetes-upgrade-377526"
	I0120 13:52:23.324527 1964401 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-377526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernete
s-upgrade-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 13:52:23.324632 1964401 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 13:52:23.326748 1964401 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 13:52:23.326997 1964401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:52:23.327079 1964401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:52:23.346430 1964401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I0120 13:52:23.347091 1964401 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:52:23.347944 1964401 main.go:141] libmachine: Using API Version  1
	I0120 13:52:23.347965 1964401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:52:23.348366 1964401 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:52:23.348557 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetMachineName
	I0120 13:52:23.348734 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:52:23.348898 1964401 start.go:159] libmachine.API.Create for "kubernetes-upgrade-377526" (driver="kvm2")
	I0120 13:52:23.348933 1964401 client.go:168] LocalClient.Create starting
	I0120 13:52:23.348990 1964401 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem
	I0120 13:52:23.349040 1964401 main.go:141] libmachine: Decoding PEM data...
	I0120 13:52:23.349069 1964401 main.go:141] libmachine: Parsing certificate...
	I0120 13:52:23.349155 1964401 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem
	I0120 13:52:23.349188 1964401 main.go:141] libmachine: Decoding PEM data...
	I0120 13:52:23.349207 1964401 main.go:141] libmachine: Parsing certificate...
	I0120 13:52:23.349247 1964401 main.go:141] libmachine: Running pre-create checks...
	I0120 13:52:23.349262 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .PreCreateCheck
	I0120 13:52:23.349596 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetConfigRaw
	I0120 13:52:23.350055 1964401 main.go:141] libmachine: Creating machine...
	I0120 13:52:23.350072 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .Create
	I0120 13:52:23.350240 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) creating KVM machine...
	I0120 13:52:23.350264 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) creating network...
	I0120 13:52:23.351814 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found existing default KVM network
	I0120 13:52:23.353471 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:23.353252 1964522 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:80:c2:b3} reservation:<nil>}
	I0120 13:52:23.354597 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:23.354453 1964522 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:12:34:d0} reservation:<nil>}
	I0120 13:52:23.355768 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:23.355665 1964522 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:30:fa:f3} reservation:<nil>}
	I0120 13:52:23.357078 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:23.356954 1964522 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039f0b0}
	I0120 13:52:23.357107 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | created network xml: 
	I0120 13:52:23.357121 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | <network>
	I0120 13:52:23.357137 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG |   <name>mk-kubernetes-upgrade-377526</name>
	I0120 13:52:23.357160 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG |   <dns enable='no'/>
	I0120 13:52:23.357175 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG |   
	I0120 13:52:23.357195 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0120 13:52:23.357207 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG |     <dhcp>
	I0120 13:52:23.357484 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0120 13:52:23.357510 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG |     </dhcp>
	I0120 13:52:23.357531 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG |   </ip>
	I0120 13:52:23.357544 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG |   
	I0120 13:52:23.357552 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | </network>
	I0120 13:52:23.357557 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | 
	I0120 13:52:23.363732 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | trying to create private KVM network mk-kubernetes-upgrade-377526 192.168.72.0/24...
	I0120 13:52:23.455764 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | private KVM network mk-kubernetes-upgrade-377526 192.168.72.0/24 created
	I0120 13:52:23.455800 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:23.455701 1964522 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:52:23.455820 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) setting up store path in /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526 ...
	I0120 13:52:23.455837 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) building disk image from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 13:52:23.455936 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Downloading /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 13:52:23.737165 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:23.736932 1964522 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa...
	I0120 13:52:23.915499 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:23.915355 1964522 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/kubernetes-upgrade-377526.rawdisk...
	I0120 13:52:23.915542 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | Writing magic tar header
	I0120 13:52:23.915557 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | Writing SSH key tar header
	I0120 13:52:23.915576 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:23.915498 1964522 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526 ...
	I0120 13:52:23.915596 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526
	I0120 13:52:23.915627 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines
	I0120 13:52:23.915646 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:52:23.915660 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526 (perms=drwx------)
	I0120 13:52:23.915679 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines (perms=drwxr-xr-x)
	I0120 13:52:23.915693 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube (perms=drwxr-xr-x)
	I0120 13:52:23.915706 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423
	I0120 13:52:23.915721 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423 (perms=drwxrwxr-x)
	I0120 13:52:23.915736 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 13:52:23.915748 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 13:52:23.915764 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 13:52:23.915778 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) creating domain...
	I0120 13:52:23.915791 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | checking permissions on dir: /home/jenkins
	I0120 13:52:23.915804 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | checking permissions on dir: /home
	I0120 13:52:23.915815 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | skipping /home - not owner
	I0120 13:52:23.917026 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) define libvirt domain using xml: 
	I0120 13:52:23.917060 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) <domain type='kvm'>
	I0120 13:52:23.917073 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   <name>kubernetes-upgrade-377526</name>
	I0120 13:52:23.917084 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   <memory unit='MiB'>2200</memory>
	I0120 13:52:23.917093 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   <vcpu>2</vcpu>
	I0120 13:52:23.917107 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   <features>
	I0120 13:52:23.917145 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <acpi/>
	I0120 13:52:23.917166 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <apic/>
	I0120 13:52:23.917180 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <pae/>
	I0120 13:52:23.917191 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     
	I0120 13:52:23.917201 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   </features>
	I0120 13:52:23.917218 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   <cpu mode='host-passthrough'>
	I0120 13:52:23.917230 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   
	I0120 13:52:23.917241 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   </cpu>
	I0120 13:52:23.917252 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   <os>
	I0120 13:52:23.917263 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <type>hvm</type>
	I0120 13:52:23.917276 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <boot dev='cdrom'/>
	I0120 13:52:23.917296 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <boot dev='hd'/>
	I0120 13:52:23.917331 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <bootmenu enable='no'/>
	I0120 13:52:23.917341 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   </os>
	I0120 13:52:23.917351 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   <devices>
	I0120 13:52:23.917362 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <disk type='file' device='cdrom'>
	I0120 13:52:23.917401 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/boot2docker.iso'/>
	I0120 13:52:23.917424 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <target dev='hdc' bus='scsi'/>
	I0120 13:52:23.917438 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <readonly/>
	I0120 13:52:23.917448 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     </disk>
	I0120 13:52:23.917461 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <disk type='file' device='disk'>
	I0120 13:52:23.917473 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 13:52:23.917495 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/kubernetes-upgrade-377526.rawdisk'/>
	I0120 13:52:23.917512 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <target dev='hda' bus='virtio'/>
	I0120 13:52:23.917527 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     </disk>
	I0120 13:52:23.917540 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <interface type='network'>
	I0120 13:52:23.917553 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <source network='mk-kubernetes-upgrade-377526'/>
	I0120 13:52:23.917564 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <model type='virtio'/>
	I0120 13:52:23.917576 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     </interface>
	I0120 13:52:23.917592 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <interface type='network'>
	I0120 13:52:23.917605 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <source network='default'/>
	I0120 13:52:23.917617 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <model type='virtio'/>
	I0120 13:52:23.917628 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     </interface>
	I0120 13:52:23.917640 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <serial type='pty'>
	I0120 13:52:23.917653 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <target port='0'/>
	I0120 13:52:23.917667 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     </serial>
	I0120 13:52:23.917682 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <console type='pty'>
	I0120 13:52:23.917700 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <target type='serial' port='0'/>
	I0120 13:52:23.917713 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     </console>
	I0120 13:52:23.917724 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     <rng model='virtio'>
	I0120 13:52:23.917759 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)       <backend model='random'>/dev/random</backend>
	I0120 13:52:23.917780 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     </rng>
	I0120 13:52:23.917791 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     
	I0120 13:52:23.917809 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)     
	I0120 13:52:23.917822 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526)   </devices>
	I0120 13:52:23.917829 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) </domain>
	I0120 13:52:23.917852 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) 
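The XML dumped above is the libvirt domain definition that the KVM driver registers before booting the VM. As a rough, illustrative sketch only (minikube's driver talks to the libvirt API directly rather than shelling out), the same define-and-start step could look like this in Go via the virsh CLI:

```go
// Illustrative sketch only: define the domain XML shown above and start it
// via the virsh CLI. The minikube KVM driver uses the libvirt API directly.
package main

import (
	"fmt"
	"os/exec"
)

func defineAndStart(xmlPath, domain string) error {
	// Register the domain definition with libvirt.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	// Boot the freshly defined domain.
	if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical file name, used here only for illustration.
	if err := defineAndStart("kubernetes-upgrade-377526.xml", "kubernetes-upgrade-377526"); err != nil {
		fmt.Println(err)
	}
}
```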
	I0120 13:52:23.922926 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:9a:69:e8 in network default
	I0120 13:52:23.923663 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:23.923700 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) starting domain...
	I0120 13:52:23.923722 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) ensuring networks are active...
	I0120 13:52:23.924603 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Ensuring network default is active
	I0120 13:52:23.924974 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Ensuring network mk-kubernetes-upgrade-377526 is active
	I0120 13:52:23.925581 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) getting domain XML...
	I0120 13:52:23.926515 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) creating domain...
	I0120 13:52:25.509284 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) waiting for IP...
	I0120 13:52:25.510337 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:25.510822 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:25.510851 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:25.510816 1964522 retry.go:31] will retry after 196.372979ms: waiting for domain to come up
	I0120 13:52:25.709292 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:25.709780 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:25.709803 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:25.709714 1964522 retry.go:31] will retry after 260.448775ms: waiting for domain to come up
	I0120 13:52:25.973954 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:25.973985 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:25.974001 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:25.971511 1964522 retry.go:31] will retry after 365.712577ms: waiting for domain to come up
	I0120 13:52:26.795600 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:26.796232 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:26.796266 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:26.796202 1964522 retry.go:31] will retry after 389.16893ms: waiting for domain to come up
	I0120 13:52:27.186710 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:27.187232 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:27.187277 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:27.187215 1964522 retry.go:31] will retry after 499.451672ms: waiting for domain to come up
	I0120 13:52:27.688872 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:27.689477 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:27.689510 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:27.689395 1964522 retry.go:31] will retry after 596.611032ms: waiting for domain to come up
	I0120 13:52:28.287198 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:28.287756 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:28.287798 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:28.287729 1964522 retry.go:31] will retry after 917.95545ms: waiting for domain to come up
	I0120 13:52:29.207573 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:29.208119 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:29.208141 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:29.208094 1964522 retry.go:31] will retry after 1.039838682s: waiting for domain to come up
	I0120 13:52:30.249266 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:30.249780 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:30.249811 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:30.249748 1964522 retry.go:31] will retry after 1.721700633s: waiting for domain to come up
	I0120 13:52:31.972912 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:31.973327 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:31.973395 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:31.973326 1964522 retry.go:31] will retry after 2.221511585s: waiting for domain to come up
	I0120 13:52:34.196083 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:34.196593 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:34.196621 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:34.196558 1964522 retry.go:31] will retry after 2.390601155s: waiting for domain to come up
	I0120 13:52:36.590474 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:36.591021 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:36.591043 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:36.591008 1964522 retry.go:31] will retry after 3.308308276s: waiting for domain to come up
	I0120 13:52:39.900957 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:39.901474 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:39.901523 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:39.901457 1964522 retry.go:31] will retry after 4.466054497s: waiting for domain to come up
	I0120 13:52:44.369659 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:44.370147 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find current IP address of domain kubernetes-upgrade-377526 in network mk-kubernetes-upgrade-377526
	I0120 13:52:44.370173 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | I0120 13:52:44.370113 1964522 retry.go:31] will retry after 4.317693796s: waiting for domain to come up
	I0120 13:52:48.689989 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:48.690497 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) found domain IP: 192.168.72.172
	I0120 13:52:48.690536 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has current primary IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
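The run of "will retry after ..." lines above is a polling loop waiting for the new domain to pick up a DHCP lease. A minimal, self-contained sketch of such a wait loop follows; the growing, jittered delay is an assumption about the pattern in the log, not minikube's exact backoff policy:

```go
// Minimal sketch of a "wait for domain IP" loop with growing, jittered delays,
// mirroring the "will retry after ..." log lines above. Not minikube's code.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Add some jitter and grow the base delay before the next attempt.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		// Stand-in for the DHCP lease lookup; succeeds on the fourth try.
		if attempts++; attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.72.172", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
```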
	I0120 13:52:48.690553 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) reserving static IP address...
	I0120 13:52:48.690847 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-377526", mac: "52:54:00:d6:ec:a6", ip: "192.168.72.172"} in network mk-kubernetes-upgrade-377526
	I0120 13:52:48.771722 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | Getting to WaitForSSH function...
	I0120 13:52:48.771761 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) reserved static IP address 192.168.72.172 for domain kubernetes-upgrade-377526
	I0120 13:52:48.771775 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) waiting for SSH...
	I0120 13:52:48.774639 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:48.775018 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526
	I0120 13:52:48.775047 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-377526 interface with MAC address 52:54:00:d6:ec:a6
	I0120 13:52:48.775211 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | Using SSH client type: external
	I0120 13:52:48.775243 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa (-rw-------)
	I0120 13:52:48.775310 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 13:52:48.775332 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | About to run SSH command:
	I0120 13:52:48.775349 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | exit 0
	I0120 13:52:48.779380 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | SSH cmd err, output: exit status 255: 
	I0120 13:52:48.779405 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0120 13:52:48.779411 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | command : exit 0
	I0120 13:52:48.779416 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | err     : exit status 255
	I0120 13:52:48.779424 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | output  : 
	I0120 13:52:51.779997 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | Getting to WaitForSSH function...
	I0120 13:52:51.782870 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:51.783282 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:51.783328 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:51.783353 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | Using SSH client type: external
	I0120 13:52:51.783392 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa (-rw-------)
	I0120 13:52:51.783457 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 13:52:51.783477 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | About to run SSH command:
	I0120 13:52:51.783487 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | exit 0
	I0120 13:52:51.915428 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | SSH cmd err, output: <nil>: 
	I0120 13:52:51.915727 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) KVM machine creation complete
	I0120 13:52:51.916047 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetConfigRaw
	I0120 13:52:51.916685 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:52:51.916889 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:52:51.917089 1964401 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 13:52:51.917106 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetState
	I0120 13:52:51.918743 1964401 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 13:52:51.918757 1964401 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 13:52:51.918776 1964401 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 13:52:51.918783 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:51.921342 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:51.921772 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:51.921802 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:51.921995 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:51.922215 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:51.922391 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:51.922529 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:51.922761 1964401 main.go:141] libmachine: Using SSH client type: native
	I0120 13:52:51.922974 1964401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:52:51.922987 1964401 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 13:52:52.038235 1964401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 13:52:52.038259 1964401 main.go:141] libmachine: Detecting the provisioner...
	I0120 13:52:52.038270 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:52.041233 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.041643 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:52.041670 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.041881 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:52.042168 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:52.042352 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:52.042511 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:52.042695 1964401 main.go:141] libmachine: Using SSH client type: native
	I0120 13:52:52.042905 1964401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:52:52.042919 1964401 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 13:52:52.160032 1964401 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 13:52:52.160139 1964401 main.go:141] libmachine: found compatible host: buildroot
	I0120 13:52:52.160152 1964401 main.go:141] libmachine: Provisioning with buildroot...
	I0120 13:52:52.160161 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetMachineName
	I0120 13:52:52.160469 1964401 buildroot.go:166] provisioning hostname "kubernetes-upgrade-377526"
	I0120 13:52:52.160498 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetMachineName
	I0120 13:52:52.160714 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:52.163354 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.163784 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:52.163822 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.164034 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:52.164247 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:52.164440 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:52.164581 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:52.164715 1964401 main.go:141] libmachine: Using SSH client type: native
	I0120 13:52:52.164906 1964401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:52:52.164920 1964401 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-377526 && echo "kubernetes-upgrade-377526" | sudo tee /etc/hostname
	I0120 13:52:52.300374 1964401 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-377526
	
	I0120 13:52:52.300404 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:52.303136 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.303507 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:52.303562 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.303738 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:52.303951 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:52.304104 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:52.304277 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:52.304456 1964401 main.go:141] libmachine: Using SSH client type: native
	I0120 13:52:52.304652 1964401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:52:52.304676 1964401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-377526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-377526/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-377526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 13:52:52.432924 1964401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 13:52:52.432961 1964401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 13:52:52.432981 1964401 buildroot.go:174] setting up certificates
	I0120 13:52:52.432995 1964401 provision.go:84] configureAuth start
	I0120 13:52:52.433008 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetMachineName
	I0120 13:52:52.433370 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetIP
	I0120 13:52:52.436460 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.436966 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:52.436998 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.437201 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:52.440341 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.440706 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:52.440740 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.440884 1964401 provision.go:143] copyHostCerts
	I0120 13:52:52.440951 1964401 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 13:52:52.440972 1964401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 13:52:52.441051 1964401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 13:52:52.441164 1964401 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 13:52:52.441175 1964401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 13:52:52.441197 1964401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 13:52:52.441251 1964401 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 13:52:52.441258 1964401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 13:52:52.441275 1964401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 13:52:52.441320 1964401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-377526 san=[127.0.0.1 192.168.72.172 kubernetes-upgrade-377526 localhost minikube]
	I0120 13:52:52.707822 1964401 provision.go:177] copyRemoteCerts
	I0120 13:52:52.707887 1964401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 13:52:52.707914 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:52.711059 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.711405 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:52.711431 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.711599 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:52.711815 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:52.711959 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:52.712206 1964401 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa Username:docker}
	I0120 13:52:52.802870 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 13:52:52.829402 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0120 13:52:52.855593 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 13:52:52.884348 1964401 provision.go:87] duration metric: took 451.335809ms to configureAuth
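configureAuth (provision.go above) generates a server certificate signed by the local minikube CA with the SANs listed in the log (127.0.0.1, 192.168.72.172, the machine name, localhost, minikube) and copies it to /etc/docker on the guest. A rough sketch of issuing such a SAN cert with Go's crypto/x509; the file names, key format, and omitted error handling are simplifying assumptions, not minikube's provisioner:

```go
// Rough sketch: issue a CA-signed server cert with the SANs from the log above.
// Assumes an RSA CA key in PKCS#1 PEM; error handling is omitted for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.pem") // hypothetical paths
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-377526"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's 26280h CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.172")},
		DNSNames:     []string{"kubernetes-upgrade-377526", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```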
	I0120 13:52:52.884379 1964401 buildroot.go:189] setting minikube options for container-runtime
	I0120 13:52:52.884562 1964401 config.go:182] Loaded profile config "kubernetes-upgrade-377526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 13:52:52.884661 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:52.887546 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.887901 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:52.887933 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:52.888067 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:52.888303 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:52.888496 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:52.888669 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:52.888948 1964401 main.go:141] libmachine: Using SSH client type: native
	I0120 13:52:52.889151 1964401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:52:52.889169 1964401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 13:52:53.125489 1964401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 13:52:53.125529 1964401 main.go:141] libmachine: Checking connection to Docker...
	I0120 13:52:53.125538 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetURL
	I0120 13:52:53.127002 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | using libvirt version 6000000
	I0120 13:52:53.129509 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.129840 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:53.129892 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.130064 1964401 main.go:141] libmachine: Docker is up and running!
	I0120 13:52:53.130078 1964401 main.go:141] libmachine: Reticulating splines...
	I0120 13:52:53.130085 1964401 client.go:171] duration metric: took 29.781141459s to LocalClient.Create
	I0120 13:52:53.130108 1964401 start.go:167] duration metric: took 29.781213044s to libmachine.API.Create "kubernetes-upgrade-377526"
	I0120 13:52:53.130118 1964401 start.go:293] postStartSetup for "kubernetes-upgrade-377526" (driver="kvm2")
	I0120 13:52:53.130132 1964401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 13:52:53.130158 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:52:53.130414 1964401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 13:52:53.130448 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:53.132598 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.132892 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:53.132929 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.133047 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:53.133244 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:53.133364 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:53.133508 1964401 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa Username:docker}
	I0120 13:52:53.220932 1964401 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 13:52:53.225822 1964401 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 13:52:53.225864 1964401 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 13:52:53.225946 1964401 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 13:52:53.226066 1964401 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 13:52:53.226169 1964401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 13:52:53.236035 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:52:53.262567 1964401 start.go:296] duration metric: took 132.428461ms for postStartSetup
	I0120 13:52:53.262661 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetConfigRaw
	I0120 13:52:53.263348 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetIP
	I0120 13:52:53.265990 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.266375 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:53.266407 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.266739 1964401 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/config.json ...
	I0120 13:52:53.266973 1964401 start.go:128] duration metric: took 29.942319741s to createHost
	I0120 13:52:53.267002 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:53.269773 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.270196 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:53.270241 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.270368 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:53.270565 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:53.270744 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:53.270886 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:53.271019 1964401 main.go:141] libmachine: Using SSH client type: native
	I0120 13:52:53.271194 1964401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:52:53.271217 1964401 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 13:52:53.387985 1964401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381173.366320641
	
	I0120 13:52:53.388013 1964401 fix.go:216] guest clock: 1737381173.366320641
	I0120 13:52:53.388024 1964401 fix.go:229] Guest: 2025-01-20 13:52:53.366320641 +0000 UTC Remote: 2025-01-20 13:52:53.26698583 +0000 UTC m=+36.165903756 (delta=99.334811ms)
	I0120 13:52:53.388057 1964401 fix.go:200] guest clock delta is within tolerance: 99.334811ms
	I0120 13:52:53.388063 1964401 start.go:83] releasing machines lock for "kubernetes-upgrade-377526", held for 30.063572642s
	I0120 13:52:53.388090 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:52:53.388409 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetIP
	I0120 13:52:53.391871 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.392341 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:53.392373 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.392578 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:52:53.393160 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:52:53.393324 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:52:53.393401 1964401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 13:52:53.393451 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:53.393521 1964401 ssh_runner.go:195] Run: cat /version.json
	I0120 13:52:53.393551 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:52:53.396299 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.396679 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:53.396721 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.396765 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.396846 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:53.397035 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:53.397154 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:53.397182 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:53.397199 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:53.397355 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:52:53.397406 1964401 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa Username:docker}
	I0120 13:52:53.397492 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:52:53.397651 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:52:53.397792 1964401 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa Username:docker}
	I0120 13:52:53.480371 1964401 ssh_runner.go:195] Run: systemctl --version
	I0120 13:52:53.514036 1964401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 13:52:53.676919 1964401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 13:52:53.684135 1964401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 13:52:53.684218 1964401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 13:52:53.701152 1964401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 13:52:53.701187 1964401 start.go:495] detecting cgroup driver to use...
	I0120 13:52:53.701280 1964401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 13:52:53.718301 1964401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 13:52:53.733765 1964401 docker.go:217] disabling cri-docker service (if available) ...
	I0120 13:52:53.733840 1964401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 13:52:53.748146 1964401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 13:52:53.762485 1964401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 13:52:53.877823 1964401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 13:52:54.026273 1964401 docker.go:233] disabling docker service ...
	I0120 13:52:54.026361 1964401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 13:52:54.042634 1964401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 13:52:54.056868 1964401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 13:52:54.192867 1964401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 13:52:54.316156 1964401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 13:52:54.331398 1964401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 13:52:54.351970 1964401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 13:52:54.352067 1964401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:52:54.364107 1964401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 13:52:54.364186 1964401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:52:54.375531 1964401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:52:54.386969 1964401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:52:54.399484 1964401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 13:52:54.412888 1964401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 13:52:54.423491 1964401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 13:52:54.423569 1964401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 13:52:54.438395 1964401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 13:52:54.449157 1964401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:52:54.574856 1964401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 13:52:54.674632 1964401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 13:52:54.674717 1964401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 13:52:54.679948 1964401 start.go:563] Will wait 60s for crictl version
	I0120 13:52:54.680023 1964401 ssh_runner.go:195] Run: which crictl
	I0120 13:52:54.684138 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 13:52:54.733452 1964401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
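After restarting CRI-O, start.go waits up to 60s for the socket at /var/run/crio/crio.sock and then asks crictl for the runtime version, which produced the version block above. A minimal sketch of that readiness check; the 500ms polling interval is an assumption:

```go
// Minimal sketch: wait for the CRI-O socket, then run "crictl version",
// mirroring the "Will wait 60s for socket path / crictl version" steps above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			break // socket exists, CRI-O is at least listening
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for", sock)
			return
		}
		time.Sleep(500 * time.Millisecond) // assumed polling interval
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	fmt.Println(string(out), err)
}
```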
	I0120 13:52:54.733546 1964401 ssh_runner.go:195] Run: crio --version
	I0120 13:52:54.766439 1964401 ssh_runner.go:195] Run: crio --version
	I0120 13:52:54.799401 1964401 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 13:52:54.800864 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetIP
	I0120 13:52:54.804216 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:54.804668 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:52:39 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:52:54.804695 1964401 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:52:54.804920 1964401 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 13:52:54.809346 1964401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:52:54.822568 1964401 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-377526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 13:52:54.822706 1964401 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 13:52:54.822766 1964401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:52:54.858180 1964401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 13:52:54.858267 1964401 ssh_runner.go:195] Run: which lz4
	I0120 13:52:54.862799 1964401 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 13:52:54.867525 1964401 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 13:52:54.867552 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 13:52:56.754596 1964401 crio.go:462] duration metric: took 1.891848289s to copy over tarball
	I0120 13:52:56.754715 1964401 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 13:52:59.605713 1964401 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.850956133s)
	I0120 13:52:59.605747 1964401 crio.go:469] duration metric: took 2.851094948s to extract the tarball
	I0120 13:52:59.605758 1964401 ssh_runner.go:146] rm: /preloaded.tar.lz4
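	[editor's note] The preload path above: since no preloaded images were found in the guest, minikube copies the ~473 MB cri-o preload tarball over SSH, unpacks it into /var with lz4-aware tar, then deletes the tarball. A compressed sketch of that sequence (plain local commands stand in for the SSH runner, and the source path is a placeholder):

```go
// Assumed flow, not minikube's ssh_runner: stage the preload tarball and
// unpack it into /var with the same tar flags the log shows.
package main

import "os/exec"

func main() {
	steps := [][]string{
		// Placeholder source path; the log copies from the jenkins cache dir.
		{"cp", "/path/to/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4", "/preloaded.tar.lz4"},
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
		{"sudo", "rm", "/preloaded.tar.lz4"},
	}
	for _, s := range steps {
		if err := exec.Command(s[0], s[1:]...).Run(); err != nil {
			panic(err)
		}
	}
}
```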
	I0120 13:52:59.661366 1964401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:52:59.732180 1964401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 13:52:59.732221 1964401 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 13:52:59.732310 1964401 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:52:59.732632 1964401 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:52:59.732821 1964401 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:52:59.732973 1964401 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:52:59.733188 1964401 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 13:52:59.733108 1964401 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 13:52:59.733104 1964401 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 13:52:59.733171 1964401 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:52:59.735657 1964401 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:52:59.735740 1964401 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 13:52:59.735740 1964401 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:52:59.735996 1964401 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 13:52:59.736056 1964401 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 13:52:59.736003 1964401 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:52:59.736193 1964401 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:52:59.736209 1964401 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:52:59.914300 1964401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:52:59.921812 1964401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:52:59.926083 1964401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:52:59.929854 1964401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 13:52:59.956894 1964401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 13:52:59.964738 1964401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:52:59.989286 1964401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 13:53:00.046209 1964401 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 13:53:00.046269 1964401 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 13:53:00.046282 1964401 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:53:00.046313 1964401 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:53:00.046342 1964401 ssh_runner.go:195] Run: which crictl
	I0120 13:53:00.046366 1964401 ssh_runner.go:195] Run: which crictl
	I0120 13:53:00.057705 1964401 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 13:53:00.057763 1964401 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:53:00.057832 1964401 ssh_runner.go:195] Run: which crictl
	I0120 13:53:00.103921 1964401 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 13:53:00.103993 1964401 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 13:53:00.104054 1964401 ssh_runner.go:195] Run: which crictl
	I0120 13:53:00.130993 1964401 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 13:53:00.131063 1964401 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 13:53:00.131119 1964401 ssh_runner.go:195] Run: which crictl
	I0120 13:53:00.135039 1964401 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 13:53:00.135090 1964401 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:53:00.135136 1964401 ssh_runner.go:195] Run: which crictl
	I0120 13:53:00.135144 1964401 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 13:53:00.135193 1964401 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 13:53:00.135265 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:53:00.135311 1964401 ssh_runner.go:195] Run: which crictl
	I0120 13:53:00.135312 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:53:00.135346 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 13:53:00.135270 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:53:00.137761 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 13:53:00.214425 1964401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:53:00.259721 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:53:00.259748 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:53:00.259815 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 13:53:00.259843 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:53:00.259820 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:53:00.259866 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 13:53:00.273185 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 13:53:00.502265 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:53:00.502482 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:53:00.502569 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 13:53:00.515316 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 13:53:00.515372 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:53:00.515521 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:53:00.515591 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 13:53:00.717300 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 13:53:00.717411 1964401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:53:00.717471 1964401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 13:53:00.717529 1964401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 13:53:00.717541 1964401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 13:53:00.717628 1964401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 13:53:00.717649 1964401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 13:53:00.777147 1964401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 13:53:00.777156 1964401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 13:53:00.777263 1964401 cache_images.go:92] duration metric: took 1.0450224s to LoadCachedImages
	W0120 13:53:00.777380 1964401 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0120 13:53:00.777401 1964401 kubeadm.go:934] updating node { 192.168.72.172 8443 v1.20.0 crio true true} ...
	I0120 13:53:00.777537 1964401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-377526 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 13:53:00.777637 1964401 ssh_runner.go:195] Run: crio config
	I0120 13:53:00.848395 1964401 cni.go:84] Creating CNI manager for ""
	I0120 13:53:00.848426 1964401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:53:00.848440 1964401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 13:53:00.848470 1964401 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.172 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-377526 NodeName:kubernetes-upgrade-377526 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 13:53:00.848683 1964401 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-377526"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 13:53:00.848781 1964401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 13:53:00.862657 1964401 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 13:53:00.862745 1964401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 13:53:00.875891 1964401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0120 13:53:00.899625 1964401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 13:53:00.921264 1964401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0120 13:53:00.942984 1964401 ssh_runner.go:195] Run: grep 192.168.72.172	control-plane.minikube.internal$ /etc/hosts
	I0120 13:53:00.947615 1964401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:53:00.962661 1964401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:53:01.109399 1964401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 13:53:01.131088 1964401 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526 for IP: 192.168.72.172
	I0120 13:53:01.131118 1964401 certs.go:194] generating shared ca certs ...
	I0120 13:53:01.131146 1964401 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:53:01.131356 1964401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 13:53:01.131422 1964401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 13:53:01.131439 1964401 certs.go:256] generating profile certs ...
	I0120 13:53:01.131518 1964401 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/client.key
	I0120 13:53:01.131543 1964401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/client.crt with IP's: []
	I0120 13:53:01.412383 1964401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/client.crt ...
	I0120 13:53:01.412424 1964401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/client.crt: {Name:mk1a3a265a829f6bf24dc13b480d658bf1b206e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:53:01.412631 1964401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/client.key ...
	I0120 13:53:01.412651 1964401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/client.key: {Name:mk6a05b659ffac608d6f9200f1a1d4dae5fdab56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:53:01.412773 1964401 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.key.f286f221
	I0120 13:53:01.412800 1964401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.crt.f286f221 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.172]
	I0120 13:53:01.588835 1964401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.crt.f286f221 ...
	I0120 13:53:01.588872 1964401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.crt.f286f221: {Name:mk62c6d209ddf944ea818c3ef4954f44939b6dec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:53:01.589074 1964401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.key.f286f221 ...
	I0120 13:53:01.589093 1964401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.key.f286f221: {Name:mkd3eafa2d3ad2433074cd148b9ac4e3b5be859c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:53:01.589209 1964401 certs.go:381] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.crt.f286f221 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.crt
	I0120 13:53:01.589309 1964401 certs.go:385] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.key.f286f221 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.key
	I0120 13:53:01.589393 1964401 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.key
	I0120 13:53:01.589418 1964401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.crt with IP's: []
	I0120 13:53:01.696898 1964401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.crt ...
	I0120 13:53:01.696932 1964401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.crt: {Name:mkeb7dd472c0899a1858b2efe0a06f6cd77957fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:53:01.697120 1964401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.key ...
	I0120 13:53:01.697146 1964401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.key: {Name:mk2a3b3fab6c78f3ad4718ea974b64a150f312a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
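	[editor's note] Profile certificate generation here mints a client cert, an apiserver serving cert with IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.72.172), and an aggregator proxy-client cert. A minimal sketch of the same idea with Go's standard library (self-signed for brevity, whereas minikube signs these with its minikubeCA; the 26280h lifetime mirrors the CertExpiration value in the cluster config above):

```go
// Sketch: create a key pair and a certificate carrying IP SANs, in the
// spirit of the apiserver profile cert generated above. Not minikube code.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs, as listed in the log line for apiserver.crt.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.72.172"),
		},
	}
	// Self-signed here; minikube signs profile certs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```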
	I0120 13:53:01.697355 1964401 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 13:53:01.697409 1964401 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 13:53:01.697425 1964401 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 13:53:01.697462 1964401 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 13:53:01.697496 1964401 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 13:53:01.697530 1964401 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 13:53:01.697589 1964401 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:53:01.698167 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 13:53:01.729876 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 13:53:01.757238 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 13:53:01.785565 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 13:53:01.811848 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0120 13:53:01.838884 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 13:53:01.868665 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 13:53:01.897161 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 13:53:01.923897 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 13:53:01.966710 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 13:53:02.006641 1964401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 13:53:02.047335 1964401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 13:53:02.075294 1964401 ssh_runner.go:195] Run: openssl version
	I0120 13:53:02.085328 1964401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 13:53:02.107478 1964401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:53:02.114675 1964401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:53:02.114767 1964401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:53:02.132064 1964401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 13:53:02.147972 1964401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 13:53:02.160411 1964401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 13:53:02.166995 1964401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 13:53:02.167112 1964401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 13:53:02.174233 1964401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 13:53:02.186577 1964401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 13:53:02.199623 1964401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 13:53:02.205458 1964401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 13:53:02.205543 1964401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 13:53:02.212448 1964401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
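	[editor's note] The block above installs each CA bundle under /usr/share/ca-certificates and symlinks it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trust anchors. A hypothetical helper doing the same dance (`linkBySubjectHash` is an invented name; it shells out to openssl exactly as the log commands do):

```go
// Sketch: compute the OpenSSL subject hash of a PEM cert and link it under
// /etc/ssl/certs/<hash>.0, mirroring the `openssl x509 -hash` + `ln -fs` pair.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // `ln -fs` overwrites; emulate by removing any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```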
	I0120 13:53:02.224694 1964401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 13:53:02.230076 1964401 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 13:53:02.230145 1964401 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-377526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-377526 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:53:02.230229 1964401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 13:53:02.230289 1964401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 13:53:02.270686 1964401 cri.go:89] found id: ""
	I0120 13:53:02.270790 1964401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 13:53:02.282929 1964401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 13:53:02.297144 1964401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 13:53:02.309399 1964401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 13:53:02.309424 1964401 kubeadm.go:157] found existing configuration files:
	
	I0120 13:53:02.309487 1964401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 13:53:02.319723 1964401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 13:53:02.319798 1964401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 13:53:02.330196 1964401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 13:53:02.340788 1964401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 13:53:02.340954 1964401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 13:53:02.352598 1964401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 13:53:02.363190 1964401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 13:53:02.363256 1964401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 13:53:02.378613 1964401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 13:53:02.389332 1964401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 13:53:02.389413 1964401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 13:53:02.399920 1964401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 13:53:02.536571 1964401 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 13:53:02.536744 1964401 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 13:53:02.706683 1964401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 13:53:02.706866 1964401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 13:53:02.707050 1964401 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 13:53:02.898848 1964401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 13:53:02.900863 1964401 out.go:235]   - Generating certificates and keys ...
	I0120 13:53:02.900981 1964401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 13:53:02.901078 1964401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 13:53:03.045157 1964401 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 13:53:03.131854 1964401 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 13:53:03.438904 1964401 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 13:53:03.508697 1964401 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 13:53:03.651434 1964401 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 13:53:03.651671 1964401 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-377526 localhost] and IPs [192.168.72.172 127.0.0.1 ::1]
	I0120 13:53:04.198557 1964401 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 13:53:04.198860 1964401 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-377526 localhost] and IPs [192.168.72.172 127.0.0.1 ::1]
	I0120 13:53:04.645346 1964401 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 13:53:04.898110 1964401 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 13:53:05.297581 1964401 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 13:53:05.297687 1964401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 13:53:05.577433 1964401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 13:53:05.796703 1964401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 13:53:05.874907 1964401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 13:53:05.944797 1964401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 13:53:05.981644 1964401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 13:53:05.983640 1964401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 13:53:05.983717 1964401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 13:53:06.189941 1964401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 13:53:06.192301 1964401 out.go:235]   - Booting up control plane ...
	I0120 13:53:06.192422 1964401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 13:53:06.203121 1964401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 13:53:06.203263 1964401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 13:53:06.206421 1964401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 13:53:06.214780 1964401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 13:53:46.211333 1964401 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 13:53:46.212310 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:53:46.212554 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:53:51.212811 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:53:51.213113 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:54:01.212770 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:54:01.213086 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:54:21.213265 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:54:21.213583 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:55:01.216017 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:55:01.216941 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:55:01.216966 1964401 kubeadm.go:310] 
	I0120 13:55:01.217090 1964401 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 13:55:01.217184 1964401 kubeadm.go:310] 		timed out waiting for the condition
	I0120 13:55:01.217212 1964401 kubeadm.go:310] 
	I0120 13:55:01.217294 1964401 kubeadm.go:310] 	This error is likely caused by:
	I0120 13:55:01.217366 1964401 kubeadm.go:310] 		- The kubelet is not running
	I0120 13:55:01.217595 1964401 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 13:55:01.217606 1964401 kubeadm.go:310] 
	I0120 13:55:01.217830 1964401 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 13:55:01.217908 1964401 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 13:55:01.217976 1964401 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 13:55:01.217983 1964401 kubeadm.go:310] 
	I0120 13:55:01.218241 1964401 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 13:55:01.218420 1964401 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 13:55:01.218429 1964401 kubeadm.go:310] 
	I0120 13:55:01.218672 1964401 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 13:55:01.219832 1964401 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 13:55:01.220367 1964401 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 13:55:01.220491 1964401 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 13:55:01.220529 1964401 kubeadm.go:310] 
	I0120 13:55:01.220686 1964401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 13:55:01.220804 1964401 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0120 13:55:01.221117 1964401 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-377526 localhost] and IPs [192.168.72.172 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-377526 localhost] and IPs [192.168.72.172 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-377526 localhost] and IPs [192.168.72.172 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-377526 localhost] and IPs [192.168.72.172 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 13:55:01.221175 1964401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 13:55:01.221385 1964401 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 13:55:02.098964 1964401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:55:02.123169 1964401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 13:55:02.139143 1964401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 13:55:02.139175 1964401 kubeadm.go:157] found existing configuration files:
	
	I0120 13:55:02.139258 1964401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 13:55:02.154339 1964401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 13:55:02.154441 1964401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 13:55:02.171274 1964401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 13:55:02.194549 1964401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 13:55:02.194656 1964401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 13:55:02.215687 1964401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 13:55:02.231003 1964401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 13:55:02.231079 1964401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 13:55:02.247355 1964401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 13:55:02.264175 1964401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 13:55:02.264267 1964401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 13:55:02.281220 1964401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 13:55:02.553760 1964401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 13:56:58.803484 1964401 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 13:56:58.803615 1964401 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 13:56:58.805286 1964401 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 13:56:58.805362 1964401 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 13:56:58.805464 1964401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 13:56:58.805603 1964401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 13:56:58.805758 1964401 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 13:56:58.805838 1964401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 13:56:58.807730 1964401 out.go:235]   - Generating certificates and keys ...
	I0120 13:56:58.807839 1964401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 13:56:58.807925 1964401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 13:56:58.808067 1964401 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 13:56:58.808170 1964401 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 13:56:58.808279 1964401 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 13:56:58.808375 1964401 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 13:56:58.808461 1964401 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 13:56:58.808551 1964401 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 13:56:58.808670 1964401 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 13:56:58.808781 1964401 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 13:56:58.808852 1964401 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 13:56:58.808932 1964401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 13:56:58.809004 1964401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 13:56:58.809069 1964401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 13:56:58.809125 1964401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 13:56:58.809177 1964401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 13:56:58.809288 1964401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 13:56:58.809431 1964401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 13:56:58.809496 1964401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 13:56:58.809590 1964401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 13:56:58.810893 1964401 out.go:235]   - Booting up control plane ...
	I0120 13:56:58.811011 1964401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 13:56:58.811140 1964401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 13:56:58.811229 1964401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 13:56:58.811298 1964401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 13:56:58.811485 1964401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 13:56:58.811526 1964401 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 13:56:58.811578 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:56:58.811745 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:56:58.811847 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:56:58.812039 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:56:58.812137 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:56:58.812407 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:56:58.812506 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:56:58.812763 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:56:58.812844 1964401 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:56:58.813105 1964401 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:56:58.813121 1964401 kubeadm.go:310] 
	I0120 13:56:58.813176 1964401 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 13:56:58.813211 1964401 kubeadm.go:310] 		timed out waiting for the condition
	I0120 13:56:58.813217 1964401 kubeadm.go:310] 
	I0120 13:56:58.813246 1964401 kubeadm.go:310] 	This error is likely caused by:
	I0120 13:56:58.813298 1964401 kubeadm.go:310] 		- The kubelet is not running
	I0120 13:56:58.813461 1964401 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 13:56:58.813477 1964401 kubeadm.go:310] 
	I0120 13:56:58.813649 1964401 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 13:56:58.813714 1964401 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 13:56:58.813771 1964401 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 13:56:58.813786 1964401 kubeadm.go:310] 
	I0120 13:56:58.813941 1964401 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 13:56:58.814057 1964401 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 13:56:58.814072 1964401 kubeadm.go:310] 
	I0120 13:56:58.814210 1964401 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 13:56:58.814322 1964401 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 13:56:58.814441 1964401 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 13:56:58.814552 1964401 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 13:56:58.814594 1964401 kubeadm.go:310] 
	I0120 13:56:58.814642 1964401 kubeadm.go:394] duration metric: took 3m56.584502828s to StartCluster
	I0120 13:56:58.814696 1964401 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 13:56:58.814768 1964401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 13:56:58.859971 1964401 cri.go:89] found id: ""
	I0120 13:56:58.860004 1964401 logs.go:282] 0 containers: []
	W0120 13:56:58.860015 1964401 logs.go:284] No container was found matching "kube-apiserver"
	I0120 13:56:58.860025 1964401 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 13:56:58.860088 1964401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 13:56:58.895067 1964401 cri.go:89] found id: ""
	I0120 13:56:58.895098 1964401 logs.go:282] 0 containers: []
	W0120 13:56:58.895106 1964401 logs.go:284] No container was found matching "etcd"
	I0120 13:56:58.895112 1964401 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 13:56:58.895167 1964401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 13:56:58.934977 1964401 cri.go:89] found id: ""
	I0120 13:56:58.935011 1964401 logs.go:282] 0 containers: []
	W0120 13:56:58.935023 1964401 logs.go:284] No container was found matching "coredns"
	I0120 13:56:58.935031 1964401 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 13:56:58.935098 1964401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 13:56:58.981247 1964401 cri.go:89] found id: ""
	I0120 13:56:58.981284 1964401 logs.go:282] 0 containers: []
	W0120 13:56:58.981297 1964401 logs.go:284] No container was found matching "kube-scheduler"
	I0120 13:56:58.981305 1964401 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 13:56:58.981377 1964401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 13:56:59.025552 1964401 cri.go:89] found id: ""
	I0120 13:56:59.025589 1964401 logs.go:282] 0 containers: []
	W0120 13:56:59.025601 1964401 logs.go:284] No container was found matching "kube-proxy"
	I0120 13:56:59.025609 1964401 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 13:56:59.025677 1964401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 13:56:59.067740 1964401 cri.go:89] found id: ""
	I0120 13:56:59.067773 1964401 logs.go:282] 0 containers: []
	W0120 13:56:59.067783 1964401 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 13:56:59.067792 1964401 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 13:56:59.067857 1964401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 13:56:59.114323 1964401 cri.go:89] found id: ""
	I0120 13:56:59.114359 1964401 logs.go:282] 0 containers: []
	W0120 13:56:59.114372 1964401 logs.go:284] No container was found matching "kindnet"
	I0120 13:56:59.114387 1964401 logs.go:123] Gathering logs for kubelet ...
	I0120 13:56:59.114404 1964401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 13:56:59.166008 1964401 logs.go:123] Gathering logs for dmesg ...
	I0120 13:56:59.166054 1964401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 13:56:59.183873 1964401 logs.go:123] Gathering logs for describe nodes ...
	I0120 13:56:59.183917 1964401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 13:56:59.330443 1964401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 13:56:59.330476 1964401 logs.go:123] Gathering logs for CRI-O ...
	I0120 13:56:59.330492 1964401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 13:56:59.435314 1964401 logs.go:123] Gathering logs for container status ...
	I0120 13:56:59.435358 1964401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0120 13:56:59.481989 1964401 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 13:56:59.482062 1964401 out.go:270] * 
	W0120 13:56:59.482126 1964401 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 13:56:59.482144 1964401 out.go:270] * 
	W0120 13:56:59.483275 1964401 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 13:56:59.487044 1964401 out.go:201] 
	W0120 13:56:59.488300 1964401 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 13:56:59.488367 1964401 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 13:56:59.488400 1964401 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 13:56:59.489899 1964401 out.go:201] 

                                                
                                                
** /stderr **
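Triage note: the kubeadm output above fails repeatedly on the kubelet health check at http://localhost:10248/healthz, and the log itself names the next steps. A minimal on-node triage sketch, using only the commands printed in the log above (run inside the VM, e.g. via `minikube ssh -p kubernetes-upgrade-377526`):

	systemctl status kubelet        # is the kubelet service active?
	journalctl -xeu kubelet         # why the kubelet exited or never came up
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # logs of a failing control-plane container

If the kubelet journal points at a cgroup-driver mismatch, the suggestion minikube prints for this error class (see the K8S_KUBELET_NOT_RUNNING block above) is to retry the same start with the extra kubelet config, for example:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd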
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-377526
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-377526: (1.450837865s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-377526 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-377526 status --format={{.Host}}: exit status 7 (87.923924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.948643359s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-377526 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (89.322352ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-377526] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-377526
	    minikube start -p kubernetes-upgrade-377526 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3775262 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-377526 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-377526 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.698286848s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-01-20 13:58:21.891759096 +0000 UTC m=+4092.909662426
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-377526 -n kubernetes-upgrade-377526
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-377526 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-377526 logs -n 25: (1.874905599s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-038404                             | cert-expiration-038404    | jenkins | v1.35.0 | 20 Jan 25 13:52 UTC | 20 Jan 25 13:53 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-934502                             | running-upgrade-934502    | jenkins | v1.35.0 | 20 Jan 25 13:53 UTC | 20 Jan 25 13:53 UTC |
	| start   | -p force-systemd-flag-821407                          | force-systemd-flag-821407 | jenkins | v1.35.0 | 20 Jan 25 13:53 UTC | 20 Jan 25 13:54 UTC |
	|         | --memory=2048 --force-systemd                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-821407 ssh cat                     | force-systemd-flag-821407 | jenkins | v1.35.0 | 20 Jan 25 13:54 UTC | 20 Jan 25 13:54 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-821407                          | force-systemd-flag-821407 | jenkins | v1.35.0 | 20 Jan 25 13:54 UTC | 20 Jan 25 13:54 UTC |
	| start   | -p stopped-upgrade-795137                             | minikube                  | jenkins | v1.26.0 | 20 Jan 25 13:54 UTC | 20 Jan 25 13:54 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-926915                                | NoKubernetes-926915       | jenkins | v1.35.0 | 20 Jan 25 13:54 UTC | 20 Jan 25 13:54 UTC |
	| start   | -p cert-options-833776                                | cert-options-833776       | jenkins | v1.35.0 | 20 Jan 25 13:54 UTC | 20 Jan 25 13:55 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-795137 stop                           | minikube                  | jenkins | v1.26.0 | 20 Jan 25 13:54 UTC | 20 Jan 25 13:55 UTC |
	| start   | -p stopped-upgrade-795137                             | stopped-upgrade-795137    | jenkins | v1.35.0 | 20 Jan 25 13:55 UTC | 20 Jan 25 13:55 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | cert-options-833776 ssh                               | cert-options-833776       | jenkins | v1.35.0 | 20 Jan 25 13:55 UTC | 20 Jan 25 13:55 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-833776 -- sudo                        | cert-options-833776       | jenkins | v1.35.0 | 20 Jan 25 13:55 UTC | 20 Jan 25 13:55 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-833776                                | cert-options-833776       | jenkins | v1.35.0 | 20 Jan 25 13:55 UTC | 20 Jan 25 13:55 UTC |
	| start   | -p old-k8s-version-191446                             | old-k8s-version-191446    | jenkins | v1.35.0 | 20 Jan 25 13:55 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-795137                             | stopped-upgrade-795137    | jenkins | v1.35.0 | 20 Jan 25 13:55 UTC | 20 Jan 25 13:55 UTC |
	| start   | -p no-preload-648067                                  | no-preload-648067         | jenkins | v1.35.0 | 20 Jan 25 13:55 UTC | 20 Jan 25 13:57 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                          |                           |         |         |                     |                     |
	| start   | -p cert-expiration-038404                             | cert-expiration-038404    | jenkins | v1.35.0 | 20 Jan 25 13:56 UTC | 20 Jan 25 13:57 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-377526                          | kubernetes-upgrade-377526 | jenkins | v1.35.0 | 20 Jan 25 13:56 UTC | 20 Jan 25 13:57 UTC |
	| start   | -p kubernetes-upgrade-377526                          | kubernetes-upgrade-377526 | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-038404                             | cert-expiration-038404    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	| start   | -p embed-certs-647109                                 | embed-certs-647109        | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                          | kubernetes-upgrade-377526 | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                          | kubernetes-upgrade-377526 | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-648067            | no-preload-648067         | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-648067                                  | no-preload-648067         | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 13:57:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 13:57:37.238403 1968767 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:57:37.238526 1968767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:57:37.238538 1968767 out.go:358] Setting ErrFile to fd 2...
	I0120 13:57:37.238545 1968767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:57:37.238799 1968767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:57:37.239427 1968767 out.go:352] Setting JSON to false
	I0120 13:57:37.240492 1968767 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20403,"bootTime":1737361054,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 13:57:37.240610 1968767 start.go:139] virtualization: kvm guest
	I0120 13:57:37.242689 1968767 out.go:177] * [kubernetes-upgrade-377526] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 13:57:37.244277 1968767 notify.go:220] Checking for updates...
	I0120 13:57:37.244310 1968767 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:57:37.245891 1968767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:57:37.247358 1968767 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:57:37.248743 1968767 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:57:37.250088 1968767 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 13:57:37.251391 1968767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:57:37.253131 1968767 config.go:182] Loaded profile config "kubernetes-upgrade-377526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:57:37.253518 1968767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:57:37.253578 1968767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:57:37.270937 1968767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0120 13:57:37.271395 1968767 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:57:37.272029 1968767 main.go:141] libmachine: Using API Version  1
	I0120 13:57:37.272053 1968767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:57:37.272385 1968767 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:57:37.272583 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:57:37.272805 1968767 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:57:37.273131 1968767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:57:37.273182 1968767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:57:37.288835 1968767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42149
	I0120 13:57:37.289330 1968767 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:57:37.289820 1968767 main.go:141] libmachine: Using API Version  1
	I0120 13:57:37.289846 1968767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:57:37.290289 1968767 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:57:37.290513 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:57:37.328130 1968767 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 13:57:37.329195 1968767 start.go:297] selected driver: kvm2
	I0120 13:57:37.329209 1968767 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-377526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:57:37.329356 1968767 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:57:37.330090 1968767 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:57:37.330172 1968767 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 13:57:37.348275 1968767 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 13:57:37.348827 1968767 cni.go:84] Creating CNI manager for ""
	I0120 13:57:37.348906 1968767 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:57:37.348958 1968767 start.go:340] cluster config:
	{Name:kubernetes-upgrade-377526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:57:37.349168 1968767 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:57:37.351094 1968767 out.go:177] * Starting "kubernetes-upgrade-377526" primary control-plane node in "kubernetes-upgrade-377526" cluster
	I0120 13:57:36.303598 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:36.304191 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 13:57:36.304216 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 13:57:36.304161 1968488 retry.go:31] will retry after 3.147875727s: waiting for domain to come up
	I0120 13:57:39.455274 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:39.455685 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 13:57:39.455717 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 13:57:39.455643 1968488 retry.go:31] will retry after 3.742393081s: waiting for domain to come up
	I0120 13:57:37.352358 1968767 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 13:57:37.352400 1968767 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 13:57:37.352415 1968767 cache.go:56] Caching tarball of preloaded images
	I0120 13:57:37.352522 1968767 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 13:57:37.352537 1968767 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 13:57:37.352641 1968767 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/config.json ...
	I0120 13:57:37.352849 1968767 start.go:360] acquireMachinesLock for kubernetes-upgrade-377526: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 13:57:43.200559 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:43.201008 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 13:57:43.201051 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 13:57:43.200983 1968488 retry.go:31] will retry after 5.029805513s: waiting for domain to come up
	I0120 13:57:49.743986 1968767 start.go:364] duration metric: took 12.391103692s to acquireMachinesLock for "kubernetes-upgrade-377526"
	I0120 13:57:49.744086 1968767 start.go:96] Skipping create...Using existing machine configuration
	I0120 13:57:49.744099 1968767 fix.go:54] fixHost starting: 
	I0120 13:57:49.744514 1968767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:57:49.744569 1968767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:57:49.762854 1968767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35235
	I0120 13:57:49.763424 1968767 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:57:49.763952 1968767 main.go:141] libmachine: Using API Version  1
	I0120 13:57:49.763982 1968767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:57:49.764379 1968767 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:57:49.764598 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:57:49.764741 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetState
	I0120 13:57:49.766580 1968767 fix.go:112] recreateIfNeeded on kubernetes-upgrade-377526: state=Running err=<nil>
	W0120 13:57:49.766601 1968767 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 13:57:49.768766 1968767 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-377526" VM ...
	I0120 13:57:48.232666 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.233143 1968407 main.go:141] libmachine: (embed-certs-647109) found domain IP: 192.168.50.62
	I0120 13:57:48.233173 1968407 main.go:141] libmachine: (embed-certs-647109) reserving static IP address...
	I0120 13:57:48.233188 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has current primary IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.233656 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find host DHCP lease matching {name: "embed-certs-647109", mac: "52:54:00:31:ac:09", ip: "192.168.50.62"} in network mk-embed-certs-647109
	I0120 13:57:48.323926 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | Getting to WaitForSSH function...
	I0120 13:57:48.323961 1968407 main.go:141] libmachine: (embed-certs-647109) reserved static IP address 192.168.50.62 for domain embed-certs-647109
	I0120 13:57:48.323975 1968407 main.go:141] libmachine: (embed-certs-647109) waiting for SSH...
	I0120 13:57:48.327529 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.328128 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:48.328158 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.328304 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | Using SSH client type: external
	I0120 13:57:48.328327 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa (-rw-------)
	I0120 13:57:48.328352 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 13:57:48.328371 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | About to run SSH command:
	I0120 13:57:48.328388 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | exit 0
	I0120 13:57:48.459301 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | SSH cmd err, output: <nil>: 
	I0120 13:57:48.459603 1968407 main.go:141] libmachine: (embed-certs-647109) KVM machine creation complete
	I0120 13:57:48.459976 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetConfigRaw
	I0120 13:57:48.460625 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 13:57:48.460873 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 13:57:48.461162 1968407 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 13:57:48.461186 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 13:57:48.463080 1968407 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 13:57:48.463096 1968407 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 13:57:48.463116 1968407 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 13:57:48.463122 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:48.465914 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.466269 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:48.466301 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.466467 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:48.466685 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:48.466841 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:48.466945 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:48.467097 1968407 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:48.467353 1968407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 13:57:48.467375 1968407 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 13:57:48.578345 1968407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 13:57:48.578376 1968407 main.go:141] libmachine: Detecting the provisioner...
	I0120 13:57:48.578388 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:48.581577 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.582051 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:48.582102 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.582326 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:48.582547 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:48.582716 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:48.582815 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:48.582941 1968407 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:48.583166 1968407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 13:57:48.583178 1968407 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 13:57:48.695827 1968407 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 13:57:48.695903 1968407 main.go:141] libmachine: found compatible host: buildroot
	I0120 13:57:48.695919 1968407 main.go:141] libmachine: Provisioning with buildroot...
	I0120 13:57:48.695932 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetMachineName
	I0120 13:57:48.696242 1968407 buildroot.go:166] provisioning hostname "embed-certs-647109"
	I0120 13:57:48.696300 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetMachineName
	I0120 13:57:48.696532 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:48.699450 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.699795 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:48.699828 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.700046 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:48.700241 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:48.700410 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:48.700503 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:48.700651 1968407 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:48.700863 1968407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 13:57:48.700882 1968407 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-647109 && echo "embed-certs-647109" | sudo tee /etc/hostname
	I0120 13:57:48.827206 1968407 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-647109
	
	I0120 13:57:48.827249 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:48.830288 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.830601 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:48.830646 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.830846 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:48.831030 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:48.831239 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:48.831392 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:48.831611 1968407 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:48.831825 1968407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 13:57:48.831844 1968407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-647109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-647109/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-647109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 13:57:48.957159 1968407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 13:57:48.957199 1968407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 13:57:48.957249 1968407 buildroot.go:174] setting up certificates
	I0120 13:57:48.957264 1968407 provision.go:84] configureAuth start
	I0120 13:57:48.957295 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetMachineName
	I0120 13:57:48.957623 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetIP
	I0120 13:57:48.960622 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.961021 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:48.961057 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.961252 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:48.964114 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.964479 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:48.964508 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:48.964631 1968407 provision.go:143] copyHostCerts
	I0120 13:57:48.964687 1968407 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 13:57:48.964714 1968407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 13:57:48.964774 1968407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 13:57:48.964952 1968407 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 13:57:48.964968 1968407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 13:57:48.965001 1968407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 13:57:48.965110 1968407 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 13:57:48.965121 1968407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 13:57:48.965149 1968407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 13:57:48.965233 1968407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.embed-certs-647109 san=[127.0.0.1 192.168.50.62 embed-certs-647109 localhost minikube]
	I0120 13:57:49.070236 1968407 provision.go:177] copyRemoteCerts
	I0120 13:57:49.070311 1968407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 13:57:49.070338 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:49.073087 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.073479 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:49.073504 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.073715 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:49.073913 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:49.074095 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:49.074239 1968407 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 13:57:49.166373 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 13:57:49.191822 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0120 13:57:49.215741 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 13:57:49.241089 1968407 provision.go:87] duration metric: took 283.806429ms to configureAuth
	I0120 13:57:49.241130 1968407 buildroot.go:189] setting minikube options for container-runtime
	I0120 13:57:49.241378 1968407 config.go:182] Loaded profile config "embed-certs-647109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:57:49.241502 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:49.244550 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.245018 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:49.245051 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.245199 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:49.245409 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:49.245612 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:49.245778 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:49.245967 1968407 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:49.246196 1968407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 13:57:49.246215 1968407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 13:57:49.482918 1968407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 13:57:49.482951 1968407 main.go:141] libmachine: Checking connection to Docker...
	I0120 13:57:49.482963 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetURL
	I0120 13:57:49.484361 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | using libvirt version 6000000
	I0120 13:57:49.486507 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.486978 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:49.487012 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.487206 1968407 main.go:141] libmachine: Docker is up and running!
	I0120 13:57:49.487222 1968407 main.go:141] libmachine: Reticulating splines...
	I0120 13:57:49.487231 1968407 client.go:171] duration metric: took 27.773721923s to LocalClient.Create
	I0120 13:57:49.487260 1968407 start.go:167] duration metric: took 27.773796364s to libmachine.API.Create "embed-certs-647109"
	I0120 13:57:49.487273 1968407 start.go:293] postStartSetup for "embed-certs-647109" (driver="kvm2")
	I0120 13:57:49.487286 1968407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 13:57:49.487316 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 13:57:49.487597 1968407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 13:57:49.487623 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:49.490003 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.490373 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:49.490423 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.490549 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:49.490819 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:49.491011 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:49.491190 1968407 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 13:57:49.578134 1968407 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 13:57:49.583137 1968407 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 13:57:49.583171 1968407 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 13:57:49.583256 1968407 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 13:57:49.583358 1968407 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 13:57:49.583484 1968407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 13:57:49.594212 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:57:49.620219 1968407 start.go:296] duration metric: took 132.926368ms for postStartSetup
	I0120 13:57:49.620290 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetConfigRaw
	I0120 13:57:49.620942 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetIP
	I0120 13:57:49.624240 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.624670 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:49.624703 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.624952 1968407 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/config.json ...
	I0120 13:57:49.625156 1968407 start.go:128] duration metric: took 27.936629088s to createHost
	I0120 13:57:49.625182 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:49.627706 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.628134 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:49.628164 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.628283 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:49.628498 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:49.628667 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:49.628816 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:49.628967 1968407 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:49.629207 1968407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 13:57:49.629224 1968407 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 13:57:49.743826 1968407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381469.729510470
	
	I0120 13:57:49.743850 1968407 fix.go:216] guest clock: 1737381469.729510470
	I0120 13:57:49.743858 1968407 fix.go:229] Guest: 2025-01-20 13:57:49.72951047 +0000 UTC Remote: 2025-01-20 13:57:49.62516974 +0000 UTC m=+43.544395788 (delta=104.34073ms)
	I0120 13:57:49.743895 1968407 fix.go:200] guest clock delta is within tolerance: 104.34073ms
	I0120 13:57:49.743903 1968407 start.go:83] releasing machines lock for "embed-certs-647109", held for 28.055553403s
	I0120 13:57:49.743941 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 13:57:49.744275 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetIP
	I0120 13:57:49.746970 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.747397 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:49.747427 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.747584 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 13:57:49.748131 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 13:57:49.748302 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 13:57:49.748408 1968407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 13:57:49.748473 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:49.748511 1968407 ssh_runner.go:195] Run: cat /version.json
	I0120 13:57:49.748541 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 13:57:49.751358 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.751618 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.751708 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:49.751735 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.751833 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:49.751992 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:49.752079 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:49.752095 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:49.752102 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:49.752262 1968407 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 13:57:49.752319 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 13:57:49.752461 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 13:57:49.752596 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 13:57:49.752707 1968407 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 13:57:49.859426 1968407 ssh_runner.go:195] Run: systemctl --version
	I0120 13:57:49.865905 1968407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 13:57:50.031975 1968407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 13:57:50.040175 1968407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 13:57:50.040268 1968407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 13:57:50.058696 1968407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 13:57:50.058730 1968407 start.go:495] detecting cgroup driver to use...
	I0120 13:57:50.058831 1968407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 13:57:50.078282 1968407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 13:57:50.094976 1968407 docker.go:217] disabling cri-docker service (if available) ...
	I0120 13:57:50.095057 1968407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 13:57:50.109781 1968407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 13:57:50.126132 1968407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 13:57:50.251550 1968407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 13:57:50.411874 1968407 docker.go:233] disabling docker service ...
	I0120 13:57:50.411948 1968407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 13:57:50.426846 1968407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 13:57:50.441366 1968407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 13:57:50.584140 1968407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 13:57:50.729815 1968407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 13:57:50.749366 1968407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 13:57:50.776711 1968407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 13:57:50.776774 1968407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:57:50.788043 1968407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 13:57:50.788125 1968407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:57:50.801722 1968407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:57:50.813998 1968407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:57:50.827956 1968407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 13:57:50.839423 1968407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:57:50.852196 1968407 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:57:50.870736 1968407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
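
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager, conmon_cgroup, and the default_sysctls list. As a sketch, one of those edits can be done the same way in Go with a multiline regexp; the drop-in path comes from the log, the helper itself is hypothetical:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupManager rewrites the cgroup_manager line in a CRI-O drop-in,
// the same effect as the logged `sed -i 's|^.*cgroup_manager = .*$|...|'`.
func setCgroupManager(path, driver string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", driver)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
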
	I0120 13:57:50.881688 1968407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 13:57:50.891785 1968407 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 13:57:50.891848 1968407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 13:57:50.906804 1968407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
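
The sysctl probe fails because /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, so the next two commands load the module and enable IPv4 forwarding. A minimal local Go sketch of that fallback (must run as root; not the ssh_runner code itself):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// The proc entry only appears once br_netfilter is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
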
	I0120 13:57:50.917088 1968407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:57:51.036956 1968407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 13:57:51.136413 1968407 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 13:57:51.136506 1968407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 13:57:51.142065 1968407 start.go:563] Will wait 60s for crictl version
	I0120 13:57:51.142127 1968407 ssh_runner.go:195] Run: which crictl
	I0120 13:57:51.146330 1968407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 13:57:51.188693 1968407 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 13:57:51.188778 1968407 ssh_runner.go:195] Run: crio --version
	I0120 13:57:51.218345 1968407 ssh_runner.go:195] Run: crio --version
	I0120 13:57:51.251307 1968407 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 13:57:49.770234 1968767 machine.go:93] provisionDockerMachine start ...
	I0120 13:57:49.770287 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:57:49.770542 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:49.773688 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:49.774204 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:49.774238 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:49.774419 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:57:49.774623 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:49.774800 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:49.774964 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:57:49.775165 1968767 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:49.775418 1968767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:57:49.775435 1968767 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 13:57:49.883773 1968767 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-377526
	
	I0120 13:57:49.883815 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetMachineName
	I0120 13:57:49.884114 1968767 buildroot.go:166] provisioning hostname "kubernetes-upgrade-377526"
	I0120 13:57:49.884148 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetMachineName
	I0120 13:57:49.884320 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:49.887340 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:49.887751 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:49.887778 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:49.888024 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:57:49.888253 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:49.888414 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:49.888571 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:57:49.888742 1968767 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:49.888944 1968767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:57:49.888958 1968767 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-377526 && echo "kubernetes-upgrade-377526" | sudo tee /etc/hostname
	I0120 13:57:50.014509 1968767 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-377526
	
	I0120 13:57:50.014545 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:50.017753 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.018122 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:50.018159 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.018376 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:57:50.018531 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:50.018686 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:50.018816 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:57:50.018956 1968767 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:50.019142 1968767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:57:50.019158 1968767 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-377526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-377526/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-377526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 13:57:50.138227 1968767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 13:57:50.138259 1968767 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 13:57:50.138306 1968767 buildroot.go:174] setting up certificates
	I0120 13:57:50.138316 1968767 provision.go:84] configureAuth start
	I0120 13:57:50.138329 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetMachineName
	I0120 13:57:50.138659 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetIP
	I0120 13:57:50.141269 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.141665 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:50.141697 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.141908 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:50.144531 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.144996 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:50.145017 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.145198 1968767 provision.go:143] copyHostCerts
	I0120 13:57:50.145266 1968767 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 13:57:50.145286 1968767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 13:57:50.145356 1968767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 13:57:50.145469 1968767 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 13:57:50.145481 1968767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 13:57:50.145511 1968767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 13:57:50.145588 1968767 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 13:57:50.145599 1968767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 13:57:50.145625 1968767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 13:57:50.145688 1968767 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-377526 san=[127.0.0.1 192.168.72.172 kubernetes-upgrade-377526 localhost minikube]
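
configureAuth above issues a server certificate signed by the existing minikube CA, with the listed host names and IPs as SANs. A self-contained Go sketch of that kind of issuance using crypto/x509; the file names in main are hypothetical stand-ins for the paths in the log, and the CA key encoding is an assumption:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert signs a server certificate against an existing CA so that
// it covers the given DNS names and IP addresses (the SAN list in the log).
func issueServerCert(caCertPEM, caKeyPEM []byte, dnsNames []string, ips []net.IP) ([]byte, []byte, error) {
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		return nil, nil, fmt.Errorf("invalid CA PEM input")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	caKey, err := x509.ParsePKCS8PrivateKey(keyBlock.Bytes)
	if err != nil {
		// Fall back to PKCS#1, a common encoding for RSA CA keys.
		caKey, err = x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			return nil, nil, err
		}
	}
	serverKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-377526"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	keyDER, err := x509.MarshalPKCS8PrivateKey(serverKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}),
		pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: keyDER}), nil
}

func main() {
	caCert, _ := os.ReadFile("ca.pem")    // hypothetical local copies of the
	caKey, _ := os.ReadFile("ca-key.pem") // CA files listed in the log
	cert, key, err := issueServerCert(caCert, caKey,
		[]string{"kubernetes-upgrade-377526", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.172")})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	_ = os.WriteFile("server.pem", cert, 0644)
	_ = os.WriteFile("server-key.pem", key, 0600)
}
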
	I0120 13:57:50.364575 1968767 provision.go:177] copyRemoteCerts
	I0120 13:57:50.364638 1968767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 13:57:50.364668 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:50.368113 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.368522 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:50.368553 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.368773 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:57:50.369076 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:50.369279 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:57:50.369475 1968767 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa Username:docker}
	I0120 13:57:50.458671 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0120 13:57:50.489388 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 13:57:50.515967 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 13:57:50.544225 1968767 provision.go:87] duration metric: took 405.890917ms to configureAuth
	I0120 13:57:50.544263 1968767 buildroot.go:189] setting minikube options for container-runtime
	I0120 13:57:50.544442 1968767 config.go:182] Loaded profile config "kubernetes-upgrade-377526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:57:50.544521 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:50.548118 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.548486 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:50.548525 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:50.548673 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:57:50.548934 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:50.549115 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:50.549286 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:57:50.549499 1968767 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:50.549719 1968767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:57:50.549736 1968767 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 13:57:53.615368 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:57:53.615647 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:57:53.615675 1967169 kubeadm.go:310] 
	I0120 13:57:53.615734 1967169 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 13:57:53.615798 1967169 kubeadm.go:310] 		timed out waiting for the condition
	I0120 13:57:53.615806 1967169 kubeadm.go:310] 
	I0120 13:57:53.615863 1967169 kubeadm.go:310] 	This error is likely caused by:
	I0120 13:57:53.615924 1967169 kubeadm.go:310] 		- The kubelet is not running
	I0120 13:57:53.616117 1967169 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 13:57:53.616131 1967169 kubeadm.go:310] 
	I0120 13:57:53.616269 1967169 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 13:57:53.616312 1967169 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 13:57:53.616354 1967169 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 13:57:53.616361 1967169 kubeadm.go:310] 
	I0120 13:57:53.616503 1967169 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 13:57:53.616646 1967169 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 13:57:53.616656 1967169 kubeadm.go:310] 
	I0120 13:57:53.616795 1967169 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 13:57:53.616921 1967169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 13:57:53.617040 1967169 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 13:57:53.617137 1967169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 13:57:53.617187 1967169 kubeadm.go:310] 
	I0120 13:57:53.617325 1967169 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 13:57:53.617445 1967169 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 13:57:53.617556 1967169 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0120 13:57:53.617721 1967169 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-191446] and IPs [192.168.61.215 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-191446] and IPs [192.168.61.215 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 13:57:53.617773 1967169 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
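
The kubeadm failure above comes down to the kubelet never answering http://localhost:10248/healthz. The probe that kubeadm's [kubelet-check] phase performs can be reproduced while following journalctl; a small Go sketch, with the retry interval and timeout chosen arbitrarily:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubeletHealthz polls the kubelet health endpoint the same way the
// kubeadm [kubelet-check] phase does, giving up after the timeout.
func waitForKubeletHealthz(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				return nil
			}
			err = fmt.Errorf("healthz returned %s", resp.Status)
		}
		fmt.Println("kubelet not healthy yet:", err)
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("kubelet did not become healthy within %s", timeout)
}

func main() {
	// 10248 is the kubelet healthz port shown in the failure output above.
	if err := waitForKubeletHealthz("http://localhost:10248/healthz", 40*time.Second); err != nil {
		fmt.Println(err)
	}
}
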
	I0120 13:57:51.252573 1968407 main.go:141] libmachine: (embed-certs-647109) Calling .GetIP
	I0120 13:57:51.255693 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:51.256129 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 14:57:38 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 13:57:51.256161 1968407 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 13:57:51.256421 1968407 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 13:57:51.261165 1968407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:57:51.275555 1968407 kubeadm.go:883] updating cluster {Name:embed-certs-647109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-647109 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 13:57:51.275692 1968407 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 13:57:51.275771 1968407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:57:51.311246 1968407 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 13:57:51.311330 1968407 ssh_runner.go:195] Run: which lz4
	I0120 13:57:51.315685 1968407 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 13:57:51.320434 1968407 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 13:57:51.320478 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 13:57:52.815234 1968407 crio.go:462] duration metric: took 1.499596309s to copy over tarball
	I0120 13:57:52.815338 1968407 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 13:57:55.016044 1968407 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.200672122s)
	I0120 13:57:55.016069 1968407 crio.go:469] duration metric: took 2.200796201s to extract the tarball
	I0120 13:57:55.016077 1968407 ssh_runner.go:146] rm: /preloaded.tar.lz4
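
The preload flow above is: stat /preloaded.tar.lz4, copy the cached tarball over when it is missing, unpack it into /var with tar -I lz4, then delete it. A Go sketch of the unpack-and-remove step that shells out to the same tar invocation (assumes tar, lz4, and sudo are available; paths are the ones from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a minikube preload tarball into destDir using the
// same flags the log shows (xattrs preserved, lz4 as the compressor).
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The tarball is removed afterwards to free space, as in the log.
	_ = os.Remove("/preloaded.tar.lz4")
}
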
	I0120 13:57:55.056942 1968407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:57:55.107546 1968407 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 13:57:55.107571 1968407 cache_images.go:84] Images are preloaded, skipping loading
	I0120 13:57:55.107580 1968407 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.32.0 crio true true} ...
	I0120 13:57:55.107702 1968407 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-647109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-647109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 13:57:55.107791 1968407 ssh_runner.go:195] Run: crio config
	I0120 13:57:55.157111 1968407 cni.go:84] Creating CNI manager for ""
	I0120 13:57:55.157141 1968407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:57:55.157154 1968407 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 13:57:55.157185 1968407 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-647109 NodeName:embed-certs-647109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 13:57:55.157379 1968407 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-647109"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
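
minikube renders a kubeadm config like the one above from per-profile values such as the node name, node IP, and CRI socket. As an illustration only, here is a trimmed-down Go text/template that produces just the nodeRegistration block; the template text and field names are assumptions, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A cut-down template for the nodeRegistration section of the kubeadm
// config shown above; the real config covers far more fields.
const nodeRegistrationTmpl = `nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	t := template.Must(template.New("nodeRegistration").Parse(nodeRegistrationTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"CRISocket": "/var/run/crio/crio.sock",
		"NodeName":  "embed-certs-647109",
		"NodeIP":    "192.168.50.62",
	})
}
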
	
	I0120 13:57:55.157467 1968407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 13:57:55.169072 1968407 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 13:57:55.169143 1968407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 13:57:55.181574 1968407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0120 13:57:55.199837 1968407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 13:57:55.217798 1968407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I0120 13:57:55.235721 1968407 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I0120 13:57:55.239785 1968407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:57:55.253356 1968407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:57:55.382223 1968407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 13:57:55.400781 1968407 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109 for IP: 192.168.50.62
	I0120 13:57:55.400806 1968407 certs.go:194] generating shared ca certs ...
	I0120 13:57:55.400824 1968407 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:57:55.401004 1968407 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 13:57:55.401056 1968407 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 13:57:55.401080 1968407 certs.go:256] generating profile certs ...
	I0120 13:57:55.401157 1968407 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/client.key
	I0120 13:57:55.401196 1968407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/client.crt with IP's: []
	I0120 13:57:55.495569 1968407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/client.crt ...
	I0120 13:57:55.495602 1968407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/client.crt: {Name:mk785083ec59a5051e9fafacd03602ed4e270bb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:57:55.495821 1968407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/client.key ...
	I0120 13:57:55.495839 1968407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/client.key: {Name:mk21f9927eb3b8633b6efd815f9eb9ef7588aeba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:57:55.495953 1968407 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.key.34f51781
	I0120 13:57:55.495979 1968407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.crt.34f51781 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.62]
	I0120 13:57:55.589648 1968407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.crt.34f51781 ...
	I0120 13:57:55.589681 1968407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.crt.34f51781: {Name:mk5e9011d06bc8779bede7f25c1d3c2a66c39df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:57:55.589900 1968407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.key.34f51781 ...
	I0120 13:57:55.589922 1968407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.key.34f51781: {Name:mk6b6f3b8df66015a0ed278904e23ef1e7ffcbed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:57:55.590028 1968407 certs.go:381] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.crt.34f51781 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.crt
	I0120 13:57:55.590167 1968407 certs.go:385] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.key.34f51781 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.key
	I0120 13:57:55.590274 1968407 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.key
	I0120 13:57:55.590302 1968407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.crt with IP's: []
	I0120 13:57:55.768743 1968407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.crt ...
	I0120 13:57:55.768775 1968407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.crt: {Name:mka229ada1e7b0f74dea68bcd1ce4dbae4745db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:57:55.768967 1968407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.key ...
	I0120 13:57:55.768998 1968407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.key: {Name:mkd1939e64c8636370329b2915b6c0251f5ca3c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:57:55.769238 1968407 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 13:57:55.769289 1968407 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 13:57:55.769306 1968407 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 13:57:55.769371 1968407 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 13:57:55.769429 1968407 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 13:57:55.769461 1968407 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 13:57:55.769517 1968407 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:57:55.770208 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 13:57:55.802531 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 13:57:55.831910 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 13:57:55.858897 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 13:57:55.884658 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 13:57:55.913869 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 13:57:55.944917 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 13:57:55.974387 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 13:57:56.006246 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 13:57:56.040747 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 13:57:56.068433 1968407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 13:57:56.096802 1968407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 13:57:56.117108 1968407 ssh_runner.go:195] Run: openssl version
	I0120 13:57:56.125379 1968407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 13:57:55.919261 1967169 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.301449355s)
	I0120 13:57:55.919390 1967169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:57:55.938129 1967169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 13:57:55.952121 1967169 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 13:57:55.952146 1967169 kubeadm.go:157] found existing configuration files:
	
	I0120 13:57:55.952201 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 13:57:55.964375 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 13:57:55.964438 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 13:57:55.975340 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 13:57:55.985132 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 13:57:55.985209 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 13:57:55.995645 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 13:57:56.006414 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 13:57:56.006493 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 13:57:56.021114 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 13:57:56.035115 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 13:57:56.035196 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
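
The four grep-then-rm exchanges above implement a simple stale-config check: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed before kubeadm init runs again. A compact local Go sketch of the same check (the real sequence runs over SSH via ssh_runner):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanStaleKubeconfigs removes any of the standard kubeconfig files that do
// not point at the expected control-plane endpoint, mirroring the
// grep-then-rm sequence in the log.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%q may not reference %s - removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
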
	I0120 13:57:56.046387 1967169 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 13:57:56.120924 1967169 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 13:57:56.121016 1967169 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 13:57:56.296535 1967169 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 13:57:56.296687 1967169 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 13:57:56.296833 1967169 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 13:57:56.528646 1967169 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 13:57:56.718656 1967169 out.go:235]   - Generating certificates and keys ...
	I0120 13:57:56.718810 1967169 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 13:57:56.718903 1967169 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 13:57:56.719022 1967169 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 13:57:56.719098 1967169 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 13:57:56.719229 1967169 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 13:57:56.719316 1967169 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 13:57:56.719402 1967169 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 13:57:56.719477 1967169 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 13:57:56.719580 1967169 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 13:57:56.719701 1967169 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 13:57:56.719760 1967169 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 13:57:56.719849 1967169 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 13:57:56.719939 1967169 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 13:57:56.727755 1967169 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 13:57:56.942623 1967169 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 13:57:57.179596 1967169 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 13:57:57.197821 1967169 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 13:57:57.199025 1967169 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 13:57:57.199103 1967169 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 13:57:57.362184 1967169 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 13:57:57.416102 1967169 out.go:235]   - Booting up control plane ...
	I0120 13:57:57.416316 1967169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 13:57:57.416452 1967169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 13:57:57.416555 1967169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 13:57:57.416662 1967169 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 13:57:57.416882 1967169 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 13:57:56.138670 1968407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:57:56.144971 1968407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:57:56.145032 1968407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:57:56.152451 1968407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 13:57:56.166748 1968407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 13:57:56.183709 1968407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 13:57:56.188941 1968407 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 13:57:56.189032 1968407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 13:57:56.195654 1968407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 13:57:56.207528 1968407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 13:57:56.220098 1968407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 13:57:56.225125 1968407 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 13:57:56.225203 1968407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 13:57:56.231493 1968407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
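[Editor's note] The run above (and the repeat at 13:58:03 below) shows minikube publishing each CA PEM under its OpenSSL subject-hash name in /etc/ssl/certs (for example b5213941.0), guarded by "test -L ... || ln -fs ..." so the link is only created when missing. As a rough illustration of that idempotent step, here is a minimal Go sketch; the paths are copied from the log and ensureHashLink is a hypothetical helper, not minikube's own code.

    package main

    import (
        "fmt"
        "os"
    )

    // ensureHashLink mirrors the shell guard "test -L <link> || ln -fs <target> <link>":
    // if the hash-named entry is not already a symlink, (re)create it so OpenSSL-style
    // clients can find the CA under its subject-hash filename.
    func ensureHashLink(target, link string) error {
        if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
            return nil // already a symlink; nothing to do
        }
        _ = os.Remove(link) // best effort; the link may simply not exist yet
        return os.Symlink(target, link)
    }

    func main() {
        // Paths copied from the log; the hash name normally comes from
        // "openssl x509 -hash -noout -in <pem>".
        if err := ensureHashLink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }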
	I0120 13:57:56.249390 1968407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 13:57:56.259657 1968407 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 13:57:56.259741 1968407 kubeadm.go:392] StartCluster: {Name:embed-certs-647109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-647109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:57:56.259863 1968407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 13:57:56.259938 1968407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 13:57:56.328134 1968407 cri.go:89] found id: ""
	I0120 13:57:56.328231 1968407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 13:57:56.341405 1968407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 13:57:56.352907 1968407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 13:57:56.365155 1968407 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 13:57:56.365177 1968407 kubeadm.go:157] found existing configuration files:
	
	I0120 13:57:56.365226 1968407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 13:57:56.376257 1968407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 13:57:56.376330 1968407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 13:57:56.387151 1968407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 13:57:56.398094 1968407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 13:57:56.398180 1968407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 13:57:56.409165 1968407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 13:57:56.420317 1968407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 13:57:56.420391 1968407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 13:57:56.431596 1968407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 13:57:56.443660 1968407 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 13:57:56.443736 1968407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 13:57:56.458196 1968407 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 13:57:56.719545 1968407 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
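[Editor's note] For context on the ssh_runner call at 13:57:56.458196: minikube runs kubeadm init on the guest with the version-pinned binaries directory prepended to PATH and a long --ignore-preflight-errors list. A minimal local sketch of assembling such a command with os/exec follows; it is illustrative only (the flag values are copied from the log, and the real call goes through ssh_runner over SSH).

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Subset of the ignore list from the logged command; illustrative only.
        ignore := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "Port-10250", "Swap", "NumCPU", "Mem",
        }
        cmd := exec.Command("kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors="+strings.Join(ignore, ","))
        out, err := cmd.CombinedOutput()
        fmt.Printf("err=%v\n%s", err, out)
    }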
	I0120 13:57:58.458680 1968767 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 13:57:58.458721 1968767 machine.go:96] duration metric: took 8.688467086s to provisionDockerMachine
	I0120 13:57:58.458736 1968767 start.go:293] postStartSetup for "kubernetes-upgrade-377526" (driver="kvm2")
	I0120 13:57:58.458750 1968767 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 13:57:58.458785 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:57:58.459132 1968767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 13:57:58.459180 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:58.462443 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.462825 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:58.462855 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.463031 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:57:58.463262 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:58.463428 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:57:58.463605 1968767 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa Username:docker}
	I0120 13:57:58.550301 1968767 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 13:57:58.555218 1968767 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 13:57:58.555246 1968767 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 13:57:58.555313 1968767 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 13:57:58.555384 1968767 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 13:57:58.555471 1968767 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 13:57:58.565219 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:57:58.590930 1968767 start.go:296] duration metric: took 132.175369ms for postStartSetup
	I0120 13:57:58.590983 1968767 fix.go:56] duration metric: took 8.846883713s for fixHost
	I0120 13:57:58.591014 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:58.594098 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.594421 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:58.594453 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.594641 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:57:58.594869 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:58.595051 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:58.595202 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:57:58.595370 1968767 main.go:141] libmachine: Using SSH client type: native
	I0120 13:57:58.595602 1968767 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0120 13:57:58.595618 1968767 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 13:57:58.707783 1968767 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381478.695770090
	
	I0120 13:57:58.707811 1968767 fix.go:216] guest clock: 1737381478.695770090
	I0120 13:57:58.707818 1968767 fix.go:229] Guest: 2025-01-20 13:57:58.69577009 +0000 UTC Remote: 2025-01-20 13:57:58.590989576 +0000 UTC m=+21.392717443 (delta=104.780514ms)
	I0120 13:57:58.707838 1968767 fix.go:200] guest clock delta is within tolerance: 104.780514ms
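[Editor's note] The fix.go lines above show the guest clock check: the provisioner runs date +%s.%N on the VM, parses the result, and compares it with the host clock, accepting the ~105ms delta as within tolerance. A minimal Go sketch of that comparison, assuming a 1s tolerance purely for illustration (the actual threshold is not shown in the log):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's "date +%s.%N" output and reports the drift
    // from local (host) time, plus whether it falls within the given tolerance.
    func clockDelta(guestOut string, tolerance time.Duration) (time.Duration, bool) {
        secs, _ := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        // Timestamp copied from the log; the 1s tolerance is an assumption.
        d, ok := clockDelta("1737381478.695770090", time.Second)
        fmt.Println(d, ok)
    }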
	I0120 13:57:58.707843 1968767 start.go:83] releasing machines lock for "kubernetes-upgrade-377526", held for 8.963807873s
	I0120 13:57:58.707870 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:57:58.708159 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetIP
	I0120 13:57:58.711112 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.711560 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:58.711585 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.711895 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:57:58.712444 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:57:58.712619 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .DriverName
	I0120 13:57:58.712717 1968767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 13:57:58.712785 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:58.712861 1968767 ssh_runner.go:195] Run: cat /version.json
	I0120 13:57:58.712891 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHHostname
	I0120 13:57:58.715626 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.715865 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.716045 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:58.716081 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.716156 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:57:58.716190 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:57:58.716189 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:57:58.716414 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHPort
	I0120 13:57:58.716417 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:58.716625 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:57:58.716629 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHKeyPath
	I0120 13:57:58.716806 1968767 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa Username:docker}
	I0120 13:57:58.716820 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetSSHUsername
	I0120 13:57:58.716984 1968767 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kubernetes-upgrade-377526/id_rsa Username:docker}
	I0120 13:57:58.821692 1968767 ssh_runner.go:195] Run: systemctl --version
	I0120 13:57:58.828857 1968767 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 13:57:58.986734 1968767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 13:57:58.993334 1968767 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 13:57:58.993420 1968767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 13:57:59.003263 1968767 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0120 13:57:59.003296 1968767 start.go:495] detecting cgroup driver to use...
	I0120 13:57:59.003378 1968767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 13:57:59.020973 1968767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 13:57:59.036008 1968767 docker.go:217] disabling cri-docker service (if available) ...
	I0120 13:57:59.036084 1968767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 13:57:59.050449 1968767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 13:57:59.070796 1968767 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 13:57:59.291629 1968767 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 13:57:59.522783 1968767 docker.go:233] disabling docker service ...
	I0120 13:57:59.522897 1968767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 13:57:59.625049 1968767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 13:57:59.706728 1968767 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 13:58:00.038525 1968767 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 13:58:00.326421 1968767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 13:58:00.376327 1968767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 13:58:00.536430 1968767 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 13:58:00.536515 1968767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:58:00.608476 1968767 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 13:58:00.608568 1968767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:58:00.695978 1968767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:58:00.769493 1968767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:58:00.828576 1968767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 13:58:00.854298 1968767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:58:00.870309 1968767 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:58:00.886669 1968767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:58:00.901276 1968767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 13:58:00.919996 1968767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 13:58:00.940916 1968767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:58:01.180183 1968767 ssh_runner.go:195] Run: sudo systemctl restart crio
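[Editor's note] The run from 13:58:00.536430 onward rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: pinning the pause image, forcing the cgroupfs cgroup manager, and injecting default_sysctls, then reloading systemd and restarting crio. A minimal Go equivalent of one of those whole-line replacements is sketched below; it is not minikube's implementation (which runs sed on the guest) and setConfValue is a hypothetical helper.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfValue replaces the whole `key = ...` line in a crio drop-in with
    // key = "value", mirroring the sed pattern used in the log.
    func setConfValue(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        // Path and values copied from the log.
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        _ = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10")
        _ = setConfValue(conf, "cgroup_manager", "cgroupfs")
    }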
	I0120 13:58:01.906430 1968767 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 13:58:01.906547 1968767 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 13:58:01.912073 1968767 start.go:563] Will wait 60s for crictl version
	I0120 13:58:01.912146 1968767 ssh_runner.go:195] Run: which crictl
	I0120 13:58:01.916415 1968767 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 13:58:01.952483 1968767 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 13:58:01.952578 1968767 ssh_runner.go:195] Run: crio --version
	I0120 13:58:01.982864 1968767 ssh_runner.go:195] Run: crio --version
	I0120 13:58:02.024831 1968767 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 13:58:02.026172 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) Calling .GetIP
	I0120 13:58:02.029107 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:58:02.029404 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:ec:a6", ip: ""} in network mk-kubernetes-upgrade-377526: {Iface:virbr4 ExpiryTime:2025-01-20 14:57:13 +0000 UTC Type:0 Mac:52:54:00:d6:ec:a6 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:kubernetes-upgrade-377526 Clientid:01:52:54:00:d6:ec:a6}
	I0120 13:58:02.029423 1968767 main.go:141] libmachine: (kubernetes-upgrade-377526) DBG | domain kubernetes-upgrade-377526 has defined IP address 192.168.72.172 and MAC address 52:54:00:d6:ec:a6 in network mk-kubernetes-upgrade-377526
	I0120 13:58:02.029740 1968767 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 13:58:02.034556 1968767 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-377526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 13:58:02.034732 1968767 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 13:58:02.034799 1968767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:58:02.080843 1968767 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 13:58:02.080868 1968767 crio.go:433] Images already preloaded, skipping extraction
	I0120 13:58:02.080920 1968767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:58:02.120490 1968767 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 13:58:02.120519 1968767 cache_images.go:84] Images are preloaded, skipping loading
	I0120 13:58:02.120528 1968767 kubeadm.go:934] updating node { 192.168.72.172 8443 v1.32.0 crio true true} ...
	I0120 13:58:02.120667 1968767 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-377526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 13:58:02.120743 1968767 ssh_runner.go:195] Run: crio config
	I0120 13:58:02.168257 1968767 cni.go:84] Creating CNI manager for ""
	I0120 13:58:02.168287 1968767 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:58:02.168301 1968767 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 13:58:02.168329 1968767 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.172 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-377526 NodeName:kubernetes-upgrade-377526 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 13:58:02.168517 1968767 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-377526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.172"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.172"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 13:58:02.168602 1968767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 13:58:02.189973 1968767 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 13:58:02.190055 1968767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 13:58:02.201172 1968767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0120 13:58:02.222577 1968767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 13:58:07.115006 1968407 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 13:58:07.115083 1968407 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 13:58:07.115194 1968407 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 13:58:07.115335 1968407 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 13:58:07.115463 1968407 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 13:58:07.115570 1968407 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 13:58:07.117328 1968407 out.go:235]   - Generating certificates and keys ...
	I0120 13:58:07.117427 1968407 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 13:58:07.117518 1968407 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 13:58:07.117616 1968407 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 13:58:07.117726 1968407 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 13:58:07.117834 1968407 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 13:58:07.117911 1968407 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 13:58:07.117983 1968407 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 13:58:07.118139 1968407 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-647109 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	I0120 13:58:07.118202 1968407 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 13:58:07.118330 1968407 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-647109 localhost] and IPs [192.168.50.62 127.0.0.1 ::1]
	I0120 13:58:07.118391 1968407 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 13:58:07.118459 1968407 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 13:58:07.118529 1968407 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 13:58:07.118636 1968407 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 13:58:07.118718 1968407 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 13:58:07.118798 1968407 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 13:58:07.118867 1968407 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 13:58:07.118958 1968407 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 13:58:07.119029 1968407 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 13:58:07.119150 1968407 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 13:58:07.119226 1968407 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 13:58:07.121619 1968407 out.go:235]   - Booting up control plane ...
	I0120 13:58:07.121736 1968407 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 13:58:07.121834 1968407 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 13:58:07.121917 1968407 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 13:58:07.122169 1968407 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 13:58:07.122296 1968407 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 13:58:07.122363 1968407 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 13:58:07.122501 1968407 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 13:58:07.122652 1968407 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 13:58:07.122729 1968407 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.713857ms
	I0120 13:58:07.122814 1968407 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 13:58:07.122888 1968407 kubeadm.go:310] [api-check] The API server is healthy after 5.502140863s
	I0120 13:58:07.123038 1968407 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 13:58:07.123228 1968407 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 13:58:07.123319 1968407 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 13:58:07.123609 1968407 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-647109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 13:58:07.123772 1968407 kubeadm.go:310] [bootstrap-token] Using token: rtb2iq.m3ux7u1ymfk5rrou
	I0120 13:58:07.125139 1968407 out.go:235]   - Configuring RBAC rules ...
	I0120 13:58:07.125265 1968407 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 13:58:07.125362 1968407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 13:58:07.125541 1968407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 13:58:07.125680 1968407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 13:58:07.125810 1968407 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 13:58:07.125917 1968407 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 13:58:07.126069 1968407 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 13:58:07.126136 1968407 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 13:58:07.126207 1968407 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 13:58:07.126221 1968407 kubeadm.go:310] 
	I0120 13:58:07.126300 1968407 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 13:58:07.126313 1968407 kubeadm.go:310] 
	I0120 13:58:07.126421 1968407 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 13:58:07.126431 1968407 kubeadm.go:310] 
	I0120 13:58:07.126467 1968407 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 13:58:07.126548 1968407 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 13:58:07.126646 1968407 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 13:58:07.126655 1968407 kubeadm.go:310] 
	I0120 13:58:07.126705 1968407 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 13:58:07.126711 1968407 kubeadm.go:310] 
	I0120 13:58:07.126750 1968407 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 13:58:07.126756 1968407 kubeadm.go:310] 
	I0120 13:58:07.126799 1968407 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 13:58:07.126862 1968407 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 13:58:07.126949 1968407 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 13:58:07.126961 1968407 kubeadm.go:310] 
	I0120 13:58:07.127049 1968407 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 13:58:07.127140 1968407 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 13:58:07.127150 1968407 kubeadm.go:310] 
	I0120 13:58:07.127253 1968407 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rtb2iq.m3ux7u1ymfk5rrou \
	I0120 13:58:07.127341 1968407 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 13:58:07.127361 1968407 kubeadm.go:310] 	--control-plane 
	I0120 13:58:07.127367 1968407 kubeadm.go:310] 
	I0120 13:58:07.127444 1968407 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 13:58:07.127450 1968407 kubeadm.go:310] 
	I0120 13:58:07.127521 1968407 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rtb2iq.m3ux7u1ymfk5rrou \
	I0120 13:58:07.127621 1968407 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
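[Editor's note] The join commands printed above carry a --discovery-token-ca-cert-hash, which is a SHA-256 digest over the cluster CA certificate's Subject Public Key Info. A minimal Go sketch of reproducing that pin from the CA file follows; the path is an assumption based on the certificateDir used in this run, and caCertHash is a hypothetical helper.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash computes the sha256:<hex> value kubeadm prints as
    // --discovery-token-ca-cert-hash: a SHA-256 of the CA's Subject Public Key Info.
    func caCertHash(caPath string) (string, error) {
        raw, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return "", fmt.Errorf("no PEM data in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        fmt.Println(h, err)
    }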
	I0120 13:58:07.127633 1968407 cni.go:84] Creating CNI manager for ""
	I0120 13:58:07.127639 1968407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:58:07.129237 1968407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 13:58:02.244589 1968767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0120 13:58:02.265523 1968767 ssh_runner.go:195] Run: grep 192.168.72.172	control-plane.minikube.internal$ /etc/hosts
	I0120 13:58:02.270926 1968767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:58:02.490190 1968767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 13:58:02.519484 1968767 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526 for IP: 192.168.72.172
	I0120 13:58:02.519517 1968767 certs.go:194] generating shared ca certs ...
	I0120 13:58:02.519549 1968767 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:58:02.519773 1968767 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 13:58:02.519839 1968767 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 13:58:02.519854 1968767 certs.go:256] generating profile certs ...
	I0120 13:58:02.519983 1968767 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/client.key
	I0120 13:58:02.520052 1968767 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.key.f286f221
	I0120 13:58:02.520115 1968767 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.key
	I0120 13:58:02.520277 1968767 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 13:58:02.520338 1968767 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 13:58:02.520353 1968767 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 13:58:02.520388 1968767 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 13:58:02.520425 1968767 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 13:58:02.520460 1968767 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 13:58:02.520520 1968767 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:58:02.521421 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 13:58:02.600560 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 13:58:02.809236 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 13:58:02.933605 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 13:58:03.056615 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0120 13:58:03.152313 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 13:58:03.193882 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 13:58:03.235282 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kubernetes-upgrade-377526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 13:58:03.277406 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 13:58:03.320537 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 13:58:03.385085 1968767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 13:58:03.444206 1968767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 13:58:03.499569 1968767 ssh_runner.go:195] Run: openssl version
	I0120 13:58:03.507701 1968767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 13:58:03.521813 1968767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 13:58:03.531444 1968767 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 13:58:03.531528 1968767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 13:58:03.547848 1968767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 13:58:03.573939 1968767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 13:58:03.635168 1968767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:58:03.648284 1968767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:58:03.648384 1968767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:58:03.660574 1968767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 13:58:03.672884 1968767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 13:58:03.699132 1968767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 13:58:03.704589 1968767 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 13:58:03.704669 1968767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 13:58:03.711533 1968767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 13:58:03.725625 1968767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 13:58:03.731094 1968767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 13:58:03.739424 1968767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 13:58:03.749200 1968767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 13:58:03.756260 1968767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 13:58:03.765159 1968767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 13:58:03.773553 1968767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
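[Editor's note] The run of "openssl x509 -noout ... -checkend 86400" calls above asks, for each control-plane certificate, whether it expires within the next 24 hours. The same check in Go, as a minimal sketch (the path is copied from the log and expiresWithin is a hypothetical helper):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within d, the same question "openssl x509 -noout -checkend 86400" answers
    // for a 24h window.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }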
	I0120 13:58:03.780282 1968767 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-377526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-377526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:58:03.780393 1968767 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 13:58:03.780497 1968767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 13:58:03.827782 1968767 cri.go:89] found id: "3d96eff7014af6865ac634614d0630d1150a8544814326efa1e6a21fca5ca117"
	I0120 13:58:03.827815 1968767 cri.go:89] found id: "346e8b9ad640107bfd8f2f7218d71b95d44a85ec62fe674012e96604836705ba"
	I0120 13:58:03.827821 1968767 cri.go:89] found id: "1b3ddb2aaab6a776208f2aebd0d39c7b952d6e24e544111821bc393a7e4dd621"
	I0120 13:58:03.827827 1968767 cri.go:89] found id: "bfc286821f83d468c96257f6af2660b6bd0c26e2b8665dfba8fbbe9e58290b98"
	I0120 13:58:03.827832 1968767 cri.go:89] found id: "aef5290e5faafded9e15145518208ab5de36f6fa79d2208f774201c4539bf0b8"
	I0120 13:58:03.827837 1968767 cri.go:89] found id: "bc2efa8a60eae84fae6396e07543675214c119e2344c615040d657314e4cc90a"
	I0120 13:58:03.827840 1968767 cri.go:89] found id: "01cbae12bda7d53c5abb4c64d864df75f56db38b5d0fbbadbf6906f8a19f5c27"
	I0120 13:58:03.827844 1968767 cri.go:89] found id: "5101e2a7f569b85c00e7567309aace157a655d850d783251ec86b5461b492c14"
	I0120 13:58:03.827847 1968767 cri.go:89] found id: "a796d9937bf81f4d8468161c258d63de7e91702783febcb9e55319453d542013"
	I0120 13:58:03.827855 1968767 cri.go:89] found id: ""
	I0120 13:58:03.827913 1968767 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-377526 -n kubernetes-upgrade-377526
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-377526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-377526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-377526
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-377526: (1.14686071s)
--- FAIL: TestKubernetesUpgrade (368.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (126.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 profile list: signal: killed (2m6.355079002s)

                                                
                                                
** stderr ** 
	E0120 13:52:25.830805 1964734 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-926915" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	E0120 13:52:25.900401 1964734 status.go:393] failed to get driver ip: parsing IP: 
	E0120 13:52:25.900439 1964734 status.go:179] status error: parsing IP: 
	E0120 13:52:25.900448 1964734 profile_list.go:118] error getting statuses: parsing IP: 

                                                
                                                
** /stderr **
no_kubernetes_test.go:171: Profile list failed : "out/minikube-linux-amd64 profile list" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-926915 -n NoKubernetes-926915
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-926915 -n NoKubernetes-926915: exit status 6 (243.120882ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 13:54:32.123121 1966378 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-926915" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-926915" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/ProfileList (126.60s)
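The "signal: killed (2m6.355079002s)" outcome above is what Go's exec package reports when a command is killed because its context deadline expired before it finished. A hedged sketch of that mechanism (the two-minute timeout below is assumed for illustration, not taken from no_kubernetes_test.go):

	// Sketch: run a command under a context deadline; a timeout yields "signal: killed".
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "profile", "list")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// When the deadline fires first, err reports "signal: killed",
			// matching the failure shown above.
			fmt.Printf("profile list failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("%s\n", out)
	}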

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (273.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-191446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-191446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m32.945037558s)

                                                
                                                
-- stdout --
	* [old-k8s-version-191446] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-191446" primary control-plane node in "old-k8s-version-191446" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:55:20.170444 1967169 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:55:20.170590 1967169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:55:20.170599 1967169 out.go:358] Setting ErrFile to fd 2...
	I0120 13:55:20.170626 1967169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:55:20.170881 1967169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:55:20.171692 1967169 out.go:352] Setting JSON to false
	I0120 13:55:20.173148 1967169 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20266,"bootTime":1737361054,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 13:55:20.173320 1967169 start.go:139] virtualization: kvm guest
	I0120 13:55:20.175687 1967169 out.go:177] * [old-k8s-version-191446] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 13:55:20.177755 1967169 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:55:20.177758 1967169 notify.go:220] Checking for updates...
	I0120 13:55:20.179219 1967169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:55:20.180618 1967169 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:55:20.182066 1967169 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:55:20.183534 1967169 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 13:55:20.184994 1967169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:55:20.186823 1967169 config.go:182] Loaded profile config "cert-expiration-038404": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:55:20.186946 1967169 config.go:182] Loaded profile config "kubernetes-upgrade-377526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 13:55:20.187086 1967169 config.go:182] Loaded profile config "stopped-upgrade-795137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 13:55:20.187203 1967169 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:55:20.231223 1967169 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 13:55:20.232567 1967169 start.go:297] selected driver: kvm2
	I0120 13:55:20.232590 1967169 start.go:901] validating driver "kvm2" against <nil>
	I0120 13:55:20.232605 1967169 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:55:20.233652 1967169 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:55:20.233743 1967169 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 13:55:20.250741 1967169 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 13:55:20.250809 1967169 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 13:55:20.251135 1967169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 13:55:20.251181 1967169 cni.go:84] Creating CNI manager for ""
	I0120 13:55:20.251235 1967169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:55:20.251249 1967169 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 13:55:20.251336 1967169 start.go:340] cluster config:
	{Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:55:20.251476 1967169 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:55:20.253683 1967169 out.go:177] * Starting "old-k8s-version-191446" primary control-plane node in "old-k8s-version-191446" cluster
	I0120 13:55:20.254997 1967169 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 13:55:20.255046 1967169 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 13:55:20.255056 1967169 cache.go:56] Caching tarball of preloaded images
	I0120 13:55:20.255161 1967169 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 13:55:20.255177 1967169 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 13:55:20.255278 1967169 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/config.json ...
	I0120 13:55:20.255299 1967169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/config.json: {Name:mka616f378b5a696b8d19b34cf750b7a2cf6f047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:55:20.255444 1967169 start.go:360] acquireMachinesLock for old-k8s-version-191446: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 13:55:21.283812 1967169 start.go:364] duration metric: took 1.02832496s to acquireMachinesLock for "old-k8s-version-191446"
	I0120 13:55:21.283900 1967169 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-versi
on-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 13:55:21.284050 1967169 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 13:55:21.286047 1967169 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 13:55:21.286266 1967169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:55:21.286350 1967169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:55:21.305587 1967169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33005
	I0120 13:55:21.306184 1967169 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:55:21.306907 1967169 main.go:141] libmachine: Using API Version  1
	I0120 13:55:21.306960 1967169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:55:21.307371 1967169 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:55:21.307616 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 13:55:21.307820 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 13:55:21.308026 1967169 start.go:159] libmachine.API.Create for "old-k8s-version-191446" (driver="kvm2")
	I0120 13:55:21.308065 1967169 client.go:168] LocalClient.Create starting
	I0120 13:55:21.308111 1967169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem
	I0120 13:55:21.308161 1967169 main.go:141] libmachine: Decoding PEM data...
	I0120 13:55:21.308189 1967169 main.go:141] libmachine: Parsing certificate...
	I0120 13:55:21.308264 1967169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem
	I0120 13:55:21.308291 1967169 main.go:141] libmachine: Decoding PEM data...
	I0120 13:55:21.308308 1967169 main.go:141] libmachine: Parsing certificate...
	I0120 13:55:21.308332 1967169 main.go:141] libmachine: Running pre-create checks...
	I0120 13:55:21.308356 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .PreCreateCheck
	I0120 13:55:21.308848 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetConfigRaw
	I0120 13:55:21.309325 1967169 main.go:141] libmachine: Creating machine...
	I0120 13:55:21.309348 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .Create
	I0120 13:55:21.309514 1967169 main.go:141] libmachine: (old-k8s-version-191446) creating KVM machine...
	I0120 13:55:21.309538 1967169 main.go:141] libmachine: (old-k8s-version-191446) creating network...
	I0120 13:55:21.311187 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found existing default KVM network
	I0120 13:55:21.314092 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:21.313860 1967192 network.go:209] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0120 13:55:21.315135 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:21.315005 1967192 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:09:3b} reservation:<nil>}
	I0120 13:55:21.316332 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:21.316228 1967192 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000285520}
	I0120 13:55:21.316367 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | created network xml: 
	I0120 13:55:21.316377 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | <network>
	I0120 13:55:21.316393 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG |   <name>mk-old-k8s-version-191446</name>
	I0120 13:55:21.316403 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG |   <dns enable='no'/>
	I0120 13:55:21.316410 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG |   
	I0120 13:55:21.316424 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0120 13:55:21.316449 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG |     <dhcp>
	I0120 13:55:21.316463 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0120 13:55:21.316469 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG |     </dhcp>
	I0120 13:55:21.316515 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG |   </ip>
	I0120 13:55:21.316536 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG |   
	I0120 13:55:21.316543 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | </network>
	I0120 13:55:21.316547 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | 
	I0120 13:55:21.322242 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | trying to create private KVM network mk-old-k8s-version-191446 192.168.61.0/24...
	I0120 13:55:21.403400 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | private KVM network mk-old-k8s-version-191446 192.168.61.0/24 created
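For readers reproducing this step by hand: the network XML logged above can be fed to libvirt directly with virsh. A rough Go sketch, assuming virsh is installed and the caller has libvirt privileges (this mirrors, but is not, the kvm2 driver's actual code path, which talks to libvirt through its API):

	// Sketch: define and start the private network shown in the log via virsh.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	const networkXML = `<network>
	  <name>mk-old-k8s-version-191446</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		f, err := os.CreateTemp("", "mk-net-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(networkXML); err != nil {
			panic(err)
		}
		f.Close()

		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-old-k8s-version-191446"},
		} {
			out, err := exec.Command("virsh", args...).CombinedOutput()
			fmt.Printf("virsh %v:\n%s", args, out)
			if err != nil {
				fmt.Println("error:", err)
				return
			}
		}
	}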
	I0120 13:55:21.403442 1967169 main.go:141] libmachine: (old-k8s-version-191446) setting up store path in /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446 ...
	I0120 13:55:21.403467 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:21.403361 1967192 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:55:21.403481 1967169 main.go:141] libmachine: (old-k8s-version-191446) building disk image from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 13:55:21.403506 1967169 main.go:141] libmachine: (old-k8s-version-191446) Downloading /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 13:55:21.699311 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:21.699132 1967192 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa...
	I0120 13:55:21.819839 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:21.819670 1967192 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/old-k8s-version-191446.rawdisk...
	I0120 13:55:21.819874 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | Writing magic tar header
	I0120 13:55:21.819929 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | Writing SSH key tar header
	I0120 13:55:21.819967 1967169 main.go:141] libmachine: (old-k8s-version-191446) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446 (perms=drwx------)
	I0120 13:55:21.819987 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:21.819799 1967192 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446 ...
	I0120 13:55:21.820008 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446
	I0120 13:55:21.820019 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines
	I0120 13:55:21.820027 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:55:21.820044 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423
	I0120 13:55:21.820073 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 13:55:21.820092 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | checking permissions on dir: /home/jenkins
	I0120 13:55:21.820105 1967169 main.go:141] libmachine: (old-k8s-version-191446) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines (perms=drwxr-xr-x)
	I0120 13:55:21.820123 1967169 main.go:141] libmachine: (old-k8s-version-191446) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube (perms=drwxr-xr-x)
	I0120 13:55:21.820137 1967169 main.go:141] libmachine: (old-k8s-version-191446) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423 (perms=drwxrwxr-x)
	I0120 13:55:21.820152 1967169 main.go:141] libmachine: (old-k8s-version-191446) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 13:55:21.820181 1967169 main.go:141] libmachine: (old-k8s-version-191446) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 13:55:21.820195 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | checking permissions on dir: /home
	I0120 13:55:21.820209 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | skipping /home - not owner
	I0120 13:55:21.820221 1967169 main.go:141] libmachine: (old-k8s-version-191446) creating domain...
	I0120 13:55:21.821513 1967169 main.go:141] libmachine: (old-k8s-version-191446) define libvirt domain using xml: 
	I0120 13:55:21.821574 1967169 main.go:141] libmachine: (old-k8s-version-191446) <domain type='kvm'>
	I0120 13:55:21.821584 1967169 main.go:141] libmachine: (old-k8s-version-191446)   <name>old-k8s-version-191446</name>
	I0120 13:55:21.821591 1967169 main.go:141] libmachine: (old-k8s-version-191446)   <memory unit='MiB'>2200</memory>
	I0120 13:55:21.821599 1967169 main.go:141] libmachine: (old-k8s-version-191446)   <vcpu>2</vcpu>
	I0120 13:55:21.821606 1967169 main.go:141] libmachine: (old-k8s-version-191446)   <features>
	I0120 13:55:21.821621 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <acpi/>
	I0120 13:55:21.821632 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <apic/>
	I0120 13:55:21.821639 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <pae/>
	I0120 13:55:21.821648 1967169 main.go:141] libmachine: (old-k8s-version-191446)     
	I0120 13:55:21.821657 1967169 main.go:141] libmachine: (old-k8s-version-191446)   </features>
	I0120 13:55:21.821670 1967169 main.go:141] libmachine: (old-k8s-version-191446)   <cpu mode='host-passthrough'>
	I0120 13:55:21.821680 1967169 main.go:141] libmachine: (old-k8s-version-191446)   
	I0120 13:55:21.821686 1967169 main.go:141] libmachine: (old-k8s-version-191446)   </cpu>
	I0120 13:55:21.821694 1967169 main.go:141] libmachine: (old-k8s-version-191446)   <os>
	I0120 13:55:21.821701 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <type>hvm</type>
	I0120 13:55:21.821713 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <boot dev='cdrom'/>
	I0120 13:55:21.821722 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <boot dev='hd'/>
	I0120 13:55:21.821730 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <bootmenu enable='no'/>
	I0120 13:55:21.821738 1967169 main.go:141] libmachine: (old-k8s-version-191446)   </os>
	I0120 13:55:21.821746 1967169 main.go:141] libmachine: (old-k8s-version-191446)   <devices>
	I0120 13:55:21.821756 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <disk type='file' device='cdrom'>
	I0120 13:55:21.821780 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/boot2docker.iso'/>
	I0120 13:55:21.821801 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <target dev='hdc' bus='scsi'/>
	I0120 13:55:21.821809 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <readonly/>
	I0120 13:55:21.821819 1967169 main.go:141] libmachine: (old-k8s-version-191446)     </disk>
	I0120 13:55:21.821852 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <disk type='file' device='disk'>
	I0120 13:55:21.821879 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 13:55:21.821895 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/old-k8s-version-191446.rawdisk'/>
	I0120 13:55:21.821907 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <target dev='hda' bus='virtio'/>
	I0120 13:55:21.821917 1967169 main.go:141] libmachine: (old-k8s-version-191446)     </disk>
	I0120 13:55:21.821928 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <interface type='network'>
	I0120 13:55:21.821937 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <source network='mk-old-k8s-version-191446'/>
	I0120 13:55:21.821948 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <model type='virtio'/>
	I0120 13:55:21.821974 1967169 main.go:141] libmachine: (old-k8s-version-191446)     </interface>
	I0120 13:55:21.822020 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <interface type='network'>
	I0120 13:55:21.822031 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <source network='default'/>
	I0120 13:55:21.822035 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <model type='virtio'/>
	I0120 13:55:21.822040 1967169 main.go:141] libmachine: (old-k8s-version-191446)     </interface>
	I0120 13:55:21.822047 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <serial type='pty'>
	I0120 13:55:21.822066 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <target port='0'/>
	I0120 13:55:21.822074 1967169 main.go:141] libmachine: (old-k8s-version-191446)     </serial>
	I0120 13:55:21.822079 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <console type='pty'>
	I0120 13:55:21.822087 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <target type='serial' port='0'/>
	I0120 13:55:21.822092 1967169 main.go:141] libmachine: (old-k8s-version-191446)     </console>
	I0120 13:55:21.822099 1967169 main.go:141] libmachine: (old-k8s-version-191446)     <rng model='virtio'>
	I0120 13:55:21.822105 1967169 main.go:141] libmachine: (old-k8s-version-191446)       <backend model='random'>/dev/random</backend>
	I0120 13:55:21.822116 1967169 main.go:141] libmachine: (old-k8s-version-191446)     </rng>
	I0120 13:55:21.822121 1967169 main.go:141] libmachine: (old-k8s-version-191446)     
	I0120 13:55:21.822128 1967169 main.go:141] libmachine: (old-k8s-version-191446)     
	I0120 13:55:21.822133 1967169 main.go:141] libmachine: (old-k8s-version-191446)   </devices>
	I0120 13:55:21.822139 1967169 main.go:141] libmachine: (old-k8s-version-191446) </domain>
	I0120 13:55:21.822147 1967169 main.go:141] libmachine: (old-k8s-version-191446) 
	I0120 13:55:21.826810 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:57:e6:f8 in network default
	I0120 13:55:21.827456 1967169 main.go:141] libmachine: (old-k8s-version-191446) starting domain...
	I0120 13:55:21.827480 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:21.827488 1967169 main.go:141] libmachine: (old-k8s-version-191446) ensuring networks are active...
	I0120 13:55:21.828264 1967169 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network default is active
	I0120 13:55:21.828653 1967169 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network mk-old-k8s-version-191446 is active
	I0120 13:55:21.829317 1967169 main.go:141] libmachine: (old-k8s-version-191446) getting domain XML...
	I0120 13:55:21.830246 1967169 main.go:141] libmachine: (old-k8s-version-191446) creating domain...
	I0120 13:55:23.249060 1967169 main.go:141] libmachine: (old-k8s-version-191446) waiting for IP...
	I0120 13:55:23.250206 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:23.250863 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:23.250957 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:23.250871 1967192 retry.go:31] will retry after 202.264582ms: waiting for domain to come up
	I0120 13:55:23.455461 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:23.456079 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:23.456111 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:23.456022 1967192 retry.go:31] will retry after 374.940164ms: waiting for domain to come up
	I0120 13:55:23.832660 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:23.833483 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:23.833535 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:23.833452 1967192 retry.go:31] will retry after 373.752504ms: waiting for domain to come up
	I0120 13:55:24.209031 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:24.209693 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:24.209727 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:24.209653 1967192 retry.go:31] will retry after 599.169374ms: waiting for domain to come up
	I0120 13:55:24.810528 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:24.811104 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:24.811167 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:24.811087 1967192 retry.go:31] will retry after 663.611218ms: waiting for domain to come up
	I0120 13:55:25.476979 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:25.477492 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:25.477528 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:25.477418 1967192 retry.go:31] will retry after 614.952298ms: waiting for domain to come up
	I0120 13:55:26.093830 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:26.094396 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:26.094516 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:26.094397 1967192 retry.go:31] will retry after 943.092022ms: waiting for domain to come up
	I0120 13:55:27.039257 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:27.040125 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:27.040152 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:27.040017 1967192 retry.go:31] will retry after 1.0450906s: waiting for domain to come up
	I0120 13:55:28.087226 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:28.087782 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:28.087876 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:28.087754 1967192 retry.go:31] will retry after 1.431088334s: waiting for domain to come up
	I0120 13:55:29.521550 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:29.522064 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:29.522093 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:29.522031 1967192 retry.go:31] will retry after 2.048834536s: waiting for domain to come up
	I0120 13:55:31.572846 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:31.573338 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:31.573377 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:31.573275 1967192 retry.go:31] will retry after 2.482933591s: waiting for domain to come up
	I0120 13:55:34.059144 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:34.059787 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:34.059820 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:34.059724 1967192 retry.go:31] will retry after 3.566613542s: waiting for domain to come up
	I0120 13:55:37.628495 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:37.629189 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:37.629220 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:37.629151 1967192 retry.go:31] will retry after 3.209411081s: waiting for domain to come up
	I0120 13:55:40.842831 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:40.843428 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 13:55:40.843463 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 13:55:40.843376 1967192 retry.go:31] will retry after 3.808564501s: waiting for domain to come up
	I0120 13:55:44.655971 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:44.656646 1967169 main.go:141] libmachine: (old-k8s-version-191446) found domain IP: 192.168.61.215
	I0120 13:55:44.656683 1967169 main.go:141] libmachine: (old-k8s-version-191446) reserving static IP address...
	I0120 13:55:44.656701 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has current primary IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:44.657092 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-191446", mac: "52:54:00:87:83:fb", ip: "192.168.61.215"} in network mk-old-k8s-version-191446
	I0120 13:55:44.743055 1967169 main.go:141] libmachine: (old-k8s-version-191446) reserved static IP address 192.168.61.215 for domain old-k8s-version-191446
	I0120 13:55:44.743089 1967169 main.go:141] libmachine: (old-k8s-version-191446) waiting for SSH...
	I0120 13:55:44.743099 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | Getting to WaitForSSH function...
	I0120 13:55:44.746222 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:44.746752 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:44.746779 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:44.746948 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH client type: external
	I0120 13:55:44.746977 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa (-rw-------)
	I0120 13:55:44.747018 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 13:55:44.747030 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | About to run SSH command:
	I0120 13:55:44.747043 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | exit 0
	I0120 13:55:44.883554 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | SSH cmd err, output: <nil>: 
	I0120 13:55:44.883783 1967169 main.go:141] libmachine: (old-k8s-version-191446) KVM machine creation complete
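The long run of "will retry after ..." lines above is a polling loop with growing, jittered delays while the driver waits for the new domain to obtain a DHCP lease. A simplified sketch of that pattern, with lookupDomainIP as a hypothetical stand-in for the libvirt lease query:

	// Sketch: poll for a domain IP with growing, jittered backoff until a deadline.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address of domain")

	// lookupDomainIP is a placeholder for the real libvirt DHCP-lease lookup.
	func lookupDomainIP(domain string) (string, error) {
		return "", errNoIP
	}

	func waitForIP(domain string, deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 200 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupDomainIP(domain); err == nil {
				return ip, nil
			}
			// Add jitter so concurrent waiters do not poll in lockstep.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			delay *= 2
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
	}

	func main() {
		if ip, err := waitForIP("old-k8s-version-191446", 30*time.Second); err == nil {
			fmt.Println("found domain IP:", ip)
		} else {
			fmt.Println(err)
		}
	}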
	I0120 13:55:44.884199 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetConfigRaw
	I0120 13:55:44.884780 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 13:55:44.884966 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 13:55:44.885115 1967169 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 13:55:44.885126 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetState
	I0120 13:55:44.886648 1967169 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 13:55:44.886665 1967169 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 13:55:44.886671 1967169 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 13:55:44.886677 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:44.889008 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:44.889393 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:44.889418 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:44.889552 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:44.889759 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:44.889947 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:44.890111 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:44.890331 1967169 main.go:141] libmachine: Using SSH client type: native
	I0120 13:55:44.890533 1967169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 13:55:44.890543 1967169 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 13:55:45.010109 1967169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 13:55:45.010142 1967169 main.go:141] libmachine: Detecting the provisioner...
	I0120 13:55:45.010155 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:45.013119 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.013436 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:45.013469 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.013654 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:45.013841 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:45.014027 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:45.014188 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:45.014357 1967169 main.go:141] libmachine: Using SSH client type: native
	I0120 13:55:45.014534 1967169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 13:55:45.014545 1967169 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 13:55:45.136077 1967169 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 13:55:45.136172 1967169 main.go:141] libmachine: found compatible host: buildroot
	I0120 13:55:45.136199 1967169 main.go:141] libmachine: Provisioning with buildroot...
	I0120 13:55:45.136221 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 13:55:45.136482 1967169 buildroot.go:166] provisioning hostname "old-k8s-version-191446"
	I0120 13:55:45.136519 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 13:55:45.136705 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:45.139694 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.140217 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:45.140250 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.140467 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:45.140672 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:45.140869 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:45.141080 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:45.141303 1967169 main.go:141] libmachine: Using SSH client type: native
	I0120 13:55:45.141515 1967169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 13:55:45.141535 1967169 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-191446 && echo "old-k8s-version-191446" | sudo tee /etc/hostname
	I0120 13:55:45.284653 1967169 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-191446
	
	I0120 13:55:45.284691 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:45.288413 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.288917 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:45.288942 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.289204 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:45.289448 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:45.289633 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:45.289799 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:45.289994 1967169 main.go:141] libmachine: Using SSH client type: native
	I0120 13:55:45.290272 1967169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 13:55:45.290299 1967169 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-191446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-191446/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-191446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 13:55:45.421857 1967169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
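The shell snippet above keeps /etc/hosts consistent with the new hostname: it rewrites an existing 127.0.1.1 entry or appends one if none is present. The same edit, expressed as a small Go function purely for illustration (the real step runs sed/tee over SSH, as shown):

	// Sketch: ensure /etc/hosts maps 127.0.1.1 to the given hostname.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	func ensureHostEntry(hosts, name string) string {
		// Nothing to do if some entry already ends with the hostname.
		if strings.Contains(hosts, " "+name+"\n") || strings.Contains(hosts, "\t"+name+"\n") {
			return hosts
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(hosts) {
			return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return hosts + "127.0.1.1 " + name + "\n"
	}

	func main() {
		hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
		fmt.Print(ensureHostEntry(hosts, "old-k8s-version-191446"))
	}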
	I0120 13:55:45.421892 1967169 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 13:55:45.421940 1967169 buildroot.go:174] setting up certificates
	I0120 13:55:45.421953 1967169 provision.go:84] configureAuth start
	I0120 13:55:45.421974 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 13:55:45.422329 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 13:55:45.425649 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.426006 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:45.426037 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.426301 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:45.429044 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.429408 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:45.429441 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.429635 1967169 provision.go:143] copyHostCerts
	I0120 13:55:45.429713 1967169 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 13:55:45.429742 1967169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 13:55:45.429813 1967169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 13:55:45.429949 1967169 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 13:55:45.429960 1967169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 13:55:45.429983 1967169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 13:55:45.430049 1967169 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 13:55:45.430057 1967169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 13:55:45.430079 1967169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 13:55:45.430191 1967169 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-191446 san=[127.0.0.1 192.168.61.215 localhost minikube old-k8s-version-191446]
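The provision step above generates a server certificate whose subject alternative names cover the VM IP, localhost, and the profile name, signed by the minikube CA. A rough approximation using only the standard library (self-signed here for brevity; the real code signs with the CA key under certs/ca-key.pem):

	// Sketch: build a server cert with the SAN list seen in the log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-191446"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-191446"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.215")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}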
	I0120 13:55:45.758693 1967169 provision.go:177] copyRemoteCerts
	I0120 13:55:45.758766 1967169 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 13:55:45.758791 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:45.762004 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.762491 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:45.762535 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.762744 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:45.762963 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:45.763134 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:45.763321 1967169 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 13:55:45.855104 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 13:55:45.884267 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 13:55:45.913054 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 13:55:45.941084 1967169 provision.go:87] duration metric: took 519.109687ms to configureAuth
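The provision step above (provision.go:117) generates a CA-signed server certificate whose SANs cover 127.0.0.1, 192.168.61.215, localhost, minikube and the node name, then copies it into /etc/docker on the guest. Below is a minimal Go sketch of producing such a certificate with crypto/x509; it is illustrative only, not minikube's provisioner, and the key size and validity period are assumptions.

// Sketch only: build a CA and a CA-signed server cert carrying the SANs from
// the log above. Key size and validity are assumptions, not minikube's values.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (stands in for .minikube/certs/ca.pem / ca-key.pem).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.old-k8s-version-191446"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate with the SANs listed at provision.go:117 above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-191446"}},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-191446"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.215")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}

	// server.pem, analogous to .minikube/machines/server.pem in the log.
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
		panic(err)
	}
}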
	I0120 13:55:45.941107 1967169 buildroot.go:189] setting minikube options for container-runtime
	I0120 13:55:45.941318 1967169 config.go:182] Loaded profile config "old-k8s-version-191446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 13:55:45.941413 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:45.944292 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.944658 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:45.944691 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:45.944902 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:45.945168 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:45.945392 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:45.945568 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:45.945771 1967169 main.go:141] libmachine: Using SSH client type: native
	I0120 13:55:45.946026 1967169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 13:55:45.946054 1967169 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 13:55:46.206287 1967169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
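The command above is pushed to the VM over SSH to write /etc/sysconfig/crio.minikube and restart CRI-O. A rough sketch of issuing that same command with golang.org/x/crypto/ssh follows; the address, user and key path come from the log, but the SSH plumbing here is generic, not minikube's ssh_runner/sshutil implementation.

// Sketch only: run the logged command on the VM via golang.org/x/crypto/ssh.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.61.215:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The exact command shown in the log above.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("%s(err=%v)\n", out, err)
}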
	I0120 13:55:46.206320 1967169 main.go:141] libmachine: Checking connection to Docker...
	I0120 13:55:46.206328 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetURL
	I0120 13:55:46.207694 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | using libvirt version 6000000
	I0120 13:55:46.210177 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.210668 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:46.210701 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.210880 1967169 main.go:141] libmachine: Docker is up and running!
	I0120 13:55:46.210895 1967169 main.go:141] libmachine: Reticulating splines...
	I0120 13:55:46.210903 1967169 client.go:171] duration metric: took 24.902825178s to LocalClient.Create
	I0120 13:55:46.210935 1967169 start.go:167] duration metric: took 24.902910498s to libmachine.API.Create "old-k8s-version-191446"
	I0120 13:55:46.210951 1967169 start.go:293] postStartSetup for "old-k8s-version-191446" (driver="kvm2")
	I0120 13:55:46.210964 1967169 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 13:55:46.210988 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 13:55:46.211266 1967169 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 13:55:46.211296 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:46.213485 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.213863 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:46.213894 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.214078 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:46.214286 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:46.214437 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:46.214551 1967169 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 13:55:46.302392 1967169 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 13:55:46.308404 1967169 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 13:55:46.308440 1967169 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 13:55:46.308523 1967169 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 13:55:46.308660 1967169 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 13:55:46.308807 1967169 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 13:55:46.322304 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:55:46.352970 1967169 start.go:296] duration metric: took 142.001414ms for postStartSetup
	I0120 13:55:46.353043 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetConfigRaw
	I0120 13:55:46.353809 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 13:55:46.356572 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.357071 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:46.357121 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.357354 1967169 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/config.json ...
	I0120 13:55:46.357580 1967169 start.go:128] duration metric: took 25.073511866s to createHost
	I0120 13:55:46.357621 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:46.360046 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.360389 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:46.360420 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.360564 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:46.360773 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:46.360975 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:46.361139 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:46.361316 1967169 main.go:141] libmachine: Using SSH client type: native
	I0120 13:55:46.361563 1967169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 13:55:46.361580 1967169 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 13:55:46.489339 1967169 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381346.461371584
	
	I0120 13:55:46.489368 1967169 fix.go:216] guest clock: 1737381346.461371584
	I0120 13:55:46.489377 1967169 fix.go:229] Guest: 2025-01-20 13:55:46.461371584 +0000 UTC Remote: 2025-01-20 13:55:46.357605867 +0000 UTC m=+26.228395931 (delta=103.765717ms)
	I0120 13:55:46.489397 1967169 fix.go:200] guest clock delta is within tolerance: 103.765717ms
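The fix.go lines above read the guest clock with `date +%s.%N` and compare it against the host time, accepting the roughly 104ms delta. A small Go sketch of that comparison is below; the 2-second tolerance is an assumed value for illustration, not the constant minikube actually uses.

// Sketch only: reproduce the guest-clock delta check from fix.go above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (seconds.nanoseconds, nine
// fractional digits) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1737381346.461371584") // value from the log
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for illustration
	fmt.Printf("guest clock delta %v within tolerance %v: %v\n", delta, tolerance, delta <= tolerance)
}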
	I0120 13:55:46.489402 1967169 start.go:83] releasing machines lock for "old-k8s-version-191446", held for 25.205545425s
	I0120 13:55:46.489420 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 13:55:46.489733 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 13:55:46.492703 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.493058 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:46.493097 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.493316 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 13:55:46.493839 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 13:55:46.494041 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 13:55:46.494167 1967169 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 13:55:46.494233 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:46.494370 1967169 ssh_runner.go:195] Run: cat /version.json
	I0120 13:55:46.494401 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 13:55:46.497184 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.497560 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:46.497607 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.497629 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.497810 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:46.498009 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:46.498104 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:46.498130 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:46.498207 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:46.498370 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 13:55:46.498395 1967169 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 13:55:46.498571 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 13:55:46.498744 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 13:55:46.498936 1967169 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 13:55:46.610777 1967169 ssh_runner.go:195] Run: systemctl --version
	I0120 13:55:46.617858 1967169 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 13:55:46.783452 1967169 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 13:55:46.791725 1967169 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 13:55:46.791804 1967169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 13:55:46.810879 1967169 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 13:55:46.810934 1967169 start.go:495] detecting cgroup driver to use...
	I0120 13:55:46.811029 1967169 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 13:55:46.829634 1967169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 13:55:46.845983 1967169 docker.go:217] disabling cri-docker service (if available) ...
	I0120 13:55:46.846063 1967169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 13:55:46.860973 1967169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 13:55:46.876973 1967169 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 13:55:47.018539 1967169 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 13:55:47.204886 1967169 docker.go:233] disabling docker service ...
	I0120 13:55:47.204970 1967169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 13:55:47.225774 1967169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 13:55:47.241075 1967169 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 13:55:47.404470 1967169 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 13:55:47.553180 1967169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 13:55:47.568137 1967169 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 13:55:47.593495 1967169 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 13:55:47.593582 1967169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:55:47.607749 1967169 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 13:55:47.607828 1967169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:55:47.619966 1967169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:55:47.632297 1967169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:55:47.644352 1967169 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
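The sed invocations above pin the pause image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf (and reset conmon_cgroup). The first two rewrites could be expressed in Go roughly as below; this is illustrative only, since minikube shells out to sed exactly as logged.

// Sketch only: rewrite pause_image and cgroup_manager with Go regexps.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}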
	I0120 13:55:47.656744 1967169 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 13:55:47.668220 1967169 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 13:55:47.668301 1967169 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 13:55:47.683386 1967169 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 13:55:47.694315 1967169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:55:47.841072 1967169 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 13:55:47.958726 1967169 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 13:55:47.958812 1967169 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 13:55:47.965514 1967169 start.go:563] Will wait 60s for crictl version
	I0120 13:55:47.965592 1967169 ssh_runner.go:195] Run: which crictl
	I0120 13:55:47.970495 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 13:55:48.025267 1967169 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 13:55:48.025350 1967169 ssh_runner.go:195] Run: crio --version
	I0120 13:55:48.067721 1967169 ssh_runner.go:195] Run: crio --version
	I0120 13:55:48.104531 1967169 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 13:55:48.106124 1967169 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 13:55:48.109373 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:48.109675 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 14:55:37 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 13:55:48.109709 1967169 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 13:55:48.110110 1967169 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0120 13:55:48.115192 1967169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:55:48.129915 1967169 kubeadm.go:883] updating cluster {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 13:55:48.130067 1967169 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 13:55:48.130132 1967169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:55:48.167000 1967169 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
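crio.go:510 above decides the preload is missing by listing images through crictl and looking for registry.k8s.io/kube-apiserver:v1.20.0. A sketch of that check is below; the JSON field names follow crictl's `images --output json` format as far as known, and the command needs root plus a reachable CRI-O socket.

// Sketch only: check whether the preloaded kube-apiserver image is present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	const want = "registry.k8s.io/kube-apiserver:v1.20.0"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, want) {
				fmt.Println("preloaded images present")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded")
}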
	I0120 13:55:48.167072 1967169 ssh_runner.go:195] Run: which lz4
	I0120 13:55:48.173178 1967169 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 13:55:48.177808 1967169 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 13:55:48.177856 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 13:55:50.109502 1967169 crio.go:462] duration metric: took 1.936353737s to copy over tarball
	I0120 13:55:50.109598 1967169 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 13:55:52.887879 1967169 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.778246279s)
	I0120 13:55:52.887932 1967169 crio.go:469] duration metric: took 2.778386909s to extract the tarball
	I0120 13:55:52.887952 1967169 ssh_runner.go:146] rm: /preloaded.tar.lz4
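The preload tarball is copied in and unpacked with `tar -I lz4 -C /var -xf`, and the harness reports the elapsed time as a duration metric. A minimal Go equivalent of timing that extraction is sketched below; paths are taken from the log, and it requires tar, lz4 and root.

// Sketch only: time the preload extraction the way ssh_runner reports it above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}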
	I0120 13:55:52.932150 1967169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:55:53.005701 1967169 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 13:55:53.005737 1967169 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 13:55:53.005795 1967169 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:55:53.005830 1967169 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:55:53.005880 1967169 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 13:55:53.005920 1967169 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:55:53.005923 1967169 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 13:55:53.005990 1967169 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 13:55:53.006109 1967169 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:55:53.006163 1967169 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:55:53.007649 1967169 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:55:53.007675 1967169 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 13:55:53.007681 1967169 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 13:55:53.007654 1967169 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:55:53.007733 1967169 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 13:55:53.008050 1967169 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:55:53.008069 1967169 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:55:53.008117 1967169 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:55:53.189254 1967169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:55:53.196469 1967169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:55:53.203944 1967169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:55:53.209191 1967169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 13:55:53.214657 1967169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:55:53.222953 1967169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 13:55:53.266928 1967169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 13:55:53.292483 1967169 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 13:55:53.292530 1967169 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:55:53.292585 1967169 ssh_runner.go:195] Run: which crictl
	I0120 13:55:53.333536 1967169 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 13:55:53.333625 1967169 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:55:53.333700 1967169 ssh_runner.go:195] Run: which crictl
	I0120 13:55:53.345481 1967169 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 13:55:53.345540 1967169 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:55:53.345614 1967169 ssh_runner.go:195] Run: which crictl
	I0120 13:55:53.366064 1967169 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 13:55:53.366121 1967169 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 13:55:53.366173 1967169 ssh_runner.go:195] Run: which crictl
	I0120 13:55:53.390452 1967169 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 13:55:53.390519 1967169 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:55:53.390574 1967169 ssh_runner.go:195] Run: which crictl
	I0120 13:55:53.391047 1967169 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 13:55:53.391090 1967169 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 13:55:53.391134 1967169 ssh_runner.go:195] Run: which crictl
	I0120 13:55:53.399622 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:55:53.399621 1967169 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 13:55:53.399732 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 13:55:53.399759 1967169 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 13:55:53.399793 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:55:53.399814 1967169 ssh_runner.go:195] Run: which crictl
	I0120 13:55:53.399702 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:55:53.399836 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 13:55:53.399864 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:55:53.448217 1967169 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:55:53.541162 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:55:53.541267 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 13:55:53.578035 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:55:53.578235 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 13:55:53.578283 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:55:53.578339 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 13:55:53.578375 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:55:53.761174 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 13:55:53.761295 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 13:55:53.763633 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 13:55:53.763819 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 13:55:53.767617 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 13:55:53.768795 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 13:55:53.768815 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 13:55:53.922981 1967169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 13:55:53.923041 1967169 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 13:55:53.923066 1967169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 13:55:53.923104 1967169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 13:55:53.923048 1967169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 13:55:53.923171 1967169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 13:55:53.928017 1967169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 13:55:53.960449 1967169 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 13:55:53.960526 1967169 cache_images.go:92] duration metric: took 954.770041ms to LoadCachedImages
	W0120 13:55:53.960615 1967169 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0120 13:55:53.960631 1967169 kubeadm.go:934] updating node { 192.168.61.215 8443 v1.20.0 crio true true} ...
	I0120 13:55:53.960742 1967169 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-191446 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
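The kubelet drop-in printed by kubeadm.go:946 above is rendered from the node's name, IP and CRI socket. A simplified text/template sketch that reproduces it follows; the struct and field names here are hypothetical stand-ins, not minikube's internal types or its full template.

// Sketch only: render a kubelet systemd drop-in like the one in the log above.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
	CRISocket   string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	opts := kubeletOpts{
		KubeletPath: "/var/lib/minikube/binaries/v1.20.0/kubelet",
		NodeName:    "old-k8s-version-191446",
		NodeIP:      "192.168.61.215",
		CRISocket:   "unix:///var/run/crio/crio.sock",
	}
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}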
	I0120 13:55:53.960840 1967169 ssh_runner.go:195] Run: crio config
	I0120 13:55:54.016304 1967169 cni.go:84] Creating CNI manager for ""
	I0120 13:55:54.016337 1967169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:55:54.016351 1967169 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 13:55:54.016386 1967169 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.215 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-191446 NodeName:old-k8s-version-191446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 13:55:54.016555 1967169 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-191446"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
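The generated kubeadm config above packs four API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) into one file, later written to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that splits such a multi-document file and prints each apiVersion and kind is below; it assumes the content was saved locally as kubeadm.yaml, and gopkg.in/yaml.v3 is an assumed dependency, not part of the test run.

// Sketch only: enumerate the documents in the multi-document kubeadm.yaml above.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}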
	I0120 13:55:54.016639 1967169 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 13:55:54.027649 1967169 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 13:55:54.027737 1967169 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 13:55:54.039032 1967169 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 13:55:54.060472 1967169 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 13:55:54.080258 1967169 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 13:55:54.098811 1967169 ssh_runner.go:195] Run: grep 192.168.61.215	control-plane.minikube.internal$ /etc/hosts
	I0120 13:55:54.102971 1967169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:55:54.116688 1967169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:55:54.241315 1967169 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 13:55:54.259847 1967169 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446 for IP: 192.168.61.215
	I0120 13:55:54.259884 1967169 certs.go:194] generating shared ca certs ...
	I0120 13:55:54.259907 1967169 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:55:54.260106 1967169 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 13:55:54.260173 1967169 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 13:55:54.260186 1967169 certs.go:256] generating profile certs ...
	I0120 13:55:54.260273 1967169 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.key
	I0120 13:55:54.260305 1967169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt with IP's: []
	I0120 13:55:54.424028 1967169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt ...
	I0120 13:55:54.424062 1967169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: {Name:mke0251f3325ebccf5703d44cd3dcc21f864bde8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:55:54.424235 1967169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.key ...
	I0120 13:55:54.424262 1967169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.key: {Name:mk0f932398796d42166e2f826db15634301007a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:55:54.424348 1967169 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key.d5f4b946
	I0120 13:55:54.424366 1967169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt.d5f4b946 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.215]
	I0120 13:55:54.528608 1967169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt.d5f4b946 ...
	I0120 13:55:54.528664 1967169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt.d5f4b946: {Name:mkc260c1f63803bc6c509c47b51b028dbca7a96d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:55:54.528871 1967169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key.d5f4b946 ...
	I0120 13:55:54.528897 1967169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key.d5f4b946: {Name:mk82337a52da360ef09275cd7422ca144d430abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:55:54.529032 1967169 certs.go:381] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt.d5f4b946 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt
	I0120 13:55:54.529119 1967169 certs.go:385] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key.d5f4b946 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key
	I0120 13:55:54.529171 1967169 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key
	I0120 13:55:54.529188 1967169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.crt with IP's: []
	I0120 13:55:54.719602 1967169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.crt ...
	I0120 13:55:54.719638 1967169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.crt: {Name:mkf3ce825f6cd72ce46eca661cd2511ac9671553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:55:54.719807 1967169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key ...
	I0120 13:55:54.719819 1967169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key: {Name:mkaf55c3b3a106252e04b8678ffc4524aa447d63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:55:54.719989 1967169 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 13:55:54.720025 1967169 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 13:55:54.720036 1967169 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 13:55:54.720058 1967169 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 13:55:54.720091 1967169 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 13:55:54.720113 1967169 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 13:55:54.720150 1967169 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:55:54.720731 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 13:55:54.751090 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 13:55:54.777838 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 13:55:54.805905 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 13:55:54.834241 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 13:55:54.861649 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 13:55:54.886615 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 13:55:54.911366 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 13:55:54.939666 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 13:55:54.968638 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 13:55:54.996570 1967169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 13:55:55.021985 1967169 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 13:55:55.040483 1967169 ssh_runner.go:195] Run: openssl version
	I0120 13:55:55.047736 1967169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 13:55:55.061013 1967169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 13:55:55.066001 1967169 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 13:55:55.066085 1967169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 13:55:55.073323 1967169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 13:55:55.086044 1967169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 13:55:55.100135 1967169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:55:55.105253 1967169 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:55:55.105331 1967169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:55:55.111820 1967169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 13:55:55.126336 1967169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 13:55:55.151222 1967169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 13:55:55.156573 1967169 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 13:55:55.156652 1967169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 13:55:55.165746 1967169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
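Each CA above is installed by hashing it with `openssl x509 -hash` and symlinking it into /etc/ssl/certs under <hash>.0 (for example b5213941.0 for minikubeCA.pem). A sketch of that step via os/exec follows; paths are from the log and running it for real requires root.

// Sketch only: hash a CA with openssl and symlink it into /etc/ssl/certs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA.pem above

	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted via", link)
}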
	I0120 13:55:55.180378 1967169 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 13:55:55.186067 1967169 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 13:55:55.186136 1967169 kubeadm.go:392] StartCluster: {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:55:55.186249 1967169 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 13:55:55.186313 1967169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 13:55:55.250742 1967169 cri.go:89] found id: ""
	I0120 13:55:55.250854 1967169 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 13:55:55.261938 1967169 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 13:55:55.272673 1967169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 13:55:55.286422 1967169 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 13:55:55.286445 1967169 kubeadm.go:157] found existing configuration files:
	
	I0120 13:55:55.286495 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 13:55:55.299224 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 13:55:55.299306 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 13:55:55.310999 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 13:55:55.321540 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 13:55:55.321622 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 13:55:55.332572 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 13:55:55.342908 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 13:55:55.342985 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 13:55:55.354234 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 13:55:55.364444 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 13:55:55.364523 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
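	The stale-config cleanup that the grep/rm lines above perform amounts to a single loop on the node. A minimal sketch, with the endpoint and paths taken from the log lines above (the loop itself is only illustrative):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the kubeconfig only if it already points at the expected control-plane endpoint
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done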
	I0120 13:55:55.374792 1967169 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 13:55:55.508776 1967169 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 13:55:55.508897 1967169 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 13:55:55.682124 1967169 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 13:55:55.682297 1967169 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 13:55:55.682434 1967169 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 13:55:55.879174 1967169 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 13:55:55.913759 1967169 out.go:235]   - Generating certificates and keys ...
	I0120 13:55:55.913919 1967169 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 13:55:55.914063 1967169 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 13:55:55.964949 1967169 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 13:55:56.169230 1967169 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 13:55:56.457722 1967169 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 13:55:56.564071 1967169 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 13:55:56.880750 1967169 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 13:55:56.881011 1967169 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-191446] and IPs [192.168.61.215 127.0.0.1 ::1]
	I0120 13:55:56.997232 1967169 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 13:55:56.997486 1967169 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-191446] and IPs [192.168.61.215 127.0.0.1 ::1]
	I0120 13:55:57.211464 1967169 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 13:55:57.332749 1967169 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 13:55:57.619622 1967169 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 13:55:57.619730 1967169 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 13:55:57.732365 1967169 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 13:55:57.891441 1967169 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 13:55:58.227322 1967169 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 13:55:58.383446 1967169 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 13:55:58.405718 1967169 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 13:55:58.407987 1967169 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 13:55:58.408968 1967169 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 13:55:58.582866 1967169 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 13:55:58.584885 1967169 out.go:235]   - Booting up control plane ...
	I0120 13:55:58.585033 1967169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 13:55:58.606804 1967169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 13:55:58.609001 1967169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 13:55:58.610443 1967169 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 13:55:58.618211 1967169 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 13:56:38.611982 1967169 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 13:56:38.612184 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:56:38.612486 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:56:43.613001 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:56:43.613227 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:56:53.612157 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:56:53.612420 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:57:13.611480 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:57:13.611724 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:57:53.615368 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:57:53.615647 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:57:53.615675 1967169 kubeadm.go:310] 
	I0120 13:57:53.615734 1967169 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 13:57:53.615798 1967169 kubeadm.go:310] 		timed out waiting for the condition
	I0120 13:57:53.615806 1967169 kubeadm.go:310] 
	I0120 13:57:53.615863 1967169 kubeadm.go:310] 	This error is likely caused by:
	I0120 13:57:53.615924 1967169 kubeadm.go:310] 		- The kubelet is not running
	I0120 13:57:53.616117 1967169 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 13:57:53.616131 1967169 kubeadm.go:310] 
	I0120 13:57:53.616269 1967169 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 13:57:53.616312 1967169 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 13:57:53.616354 1967169 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 13:57:53.616361 1967169 kubeadm.go:310] 
	I0120 13:57:53.616503 1967169 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 13:57:53.616646 1967169 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 13:57:53.616656 1967169 kubeadm.go:310] 
	I0120 13:57:53.616795 1967169 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 13:57:53.616921 1967169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 13:57:53.617040 1967169 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 13:57:53.617137 1967169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 13:57:53.617187 1967169 kubeadm.go:310] 
	I0120 13:57:53.617325 1967169 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 13:57:53.617445 1967169 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 13:57:53.617556 1967169 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
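	The checks kubeadm recommends above can be run from the host against this profile's node. A hedged sketch: the minikube ssh wrapper and profile flag are assumptions based on this run, only the on-node commands come from the log itself:

	    minikube -p old-k8s-version-191446 ssh "sudo systemctl status kubelet"
	    minikube -p old-k8s-version-191446 ssh "sudo journalctl -u kubelet --no-pager -n 100"
	    # the health endpoint kubeadm polls during wait-control-plane
	    minikube -p old-k8s-version-191446 ssh "curl -sSL http://localhost:10248/healthz"
	    # list control-plane containers under CRI-O, as suggested above
	    minikube -p old-k8s-version-191446 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"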
	W0120 13:57:53.617721 1967169 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-191446] and IPs [192.168.61.215 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-191446] and IPs [192.168.61.215 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-191446] and IPs [192.168.61.215 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-191446] and IPs [192.168.61.215 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 13:57:53.617773 1967169 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 13:57:55.919261 1967169 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.301449355s)
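	Done by hand, the reset minikube runs between the two init attempts would look roughly like the following; the binary path and CRI socket are taken from the line above:

	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force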
	I0120 13:57:55.919390 1967169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:57:55.938129 1967169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 13:57:55.952121 1967169 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 13:57:55.952146 1967169 kubeadm.go:157] found existing configuration files:
	
	I0120 13:57:55.952201 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 13:57:55.964375 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 13:57:55.964438 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 13:57:55.975340 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 13:57:55.985132 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 13:57:55.985209 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 13:57:55.995645 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 13:57:56.006414 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 13:57:56.006493 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 13:57:56.021114 1967169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 13:57:56.035115 1967169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 13:57:56.035196 1967169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 13:57:56.046387 1967169 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 13:57:56.120924 1967169 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 13:57:56.121016 1967169 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 13:57:56.296535 1967169 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 13:57:56.296687 1967169 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 13:57:56.296833 1967169 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 13:57:56.528646 1967169 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 13:57:56.718656 1967169 out.go:235]   - Generating certificates and keys ...
	I0120 13:57:56.718810 1967169 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 13:57:56.718903 1967169 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 13:57:56.719022 1967169 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 13:57:56.719098 1967169 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 13:57:56.719229 1967169 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 13:57:56.719316 1967169 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 13:57:56.719402 1967169 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 13:57:56.719477 1967169 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 13:57:56.719580 1967169 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 13:57:56.719701 1967169 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 13:57:56.719760 1967169 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 13:57:56.719849 1967169 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 13:57:56.719939 1967169 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 13:57:56.727755 1967169 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 13:57:56.942623 1967169 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 13:57:57.179596 1967169 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 13:57:57.197821 1967169 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 13:57:57.199025 1967169 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 13:57:57.199103 1967169 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 13:57:57.362184 1967169 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 13:57:57.416102 1967169 out.go:235]   - Booting up control plane ...
	I0120 13:57:57.416316 1967169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 13:57:57.416452 1967169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 13:57:57.416555 1967169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 13:57:57.416662 1967169 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 13:57:57.416882 1967169 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 13:58:37.381488 1967169 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 13:58:37.381657 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:58:37.381965 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:58:42.383087 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:58:42.383314 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:58:52.384038 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:58:52.384291 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:59:12.383365 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:59:12.383652 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:59:52.383434 1967169 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 13:59:52.383693 1967169 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 13:59:52.383712 1967169 kubeadm.go:310] 
	I0120 13:59:52.383781 1967169 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 13:59:52.383852 1967169 kubeadm.go:310] 		timed out waiting for the condition
	I0120 13:59:52.383871 1967169 kubeadm.go:310] 
	I0120 13:59:52.383930 1967169 kubeadm.go:310] 	This error is likely caused by:
	I0120 13:59:52.383996 1967169 kubeadm.go:310] 		- The kubelet is not running
	I0120 13:59:52.384157 1967169 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 13:59:52.384168 1967169 kubeadm.go:310] 
	I0120 13:59:52.384356 1967169 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 13:59:52.384421 1967169 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 13:59:52.384469 1967169 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 13:59:52.384478 1967169 kubeadm.go:310] 
	I0120 13:59:52.384637 1967169 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 13:59:52.384773 1967169 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 13:59:52.384783 1967169 kubeadm.go:310] 
	I0120 13:59:52.384927 1967169 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 13:59:52.385057 1967169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 13:59:52.385171 1967169 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 13:59:52.385286 1967169 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 13:59:52.385298 1967169 kubeadm.go:310] 
	I0120 13:59:52.386216 1967169 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 13:59:52.386337 1967169 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 13:59:52.386434 1967169 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 13:59:52.386513 1967169 kubeadm.go:394] duration metric: took 3m57.200382652s to StartCluster
	I0120 13:59:52.386571 1967169 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 13:59:52.386663 1967169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 13:59:52.445478 1967169 cri.go:89] found id: ""
	I0120 13:59:52.445517 1967169 logs.go:282] 0 containers: []
	W0120 13:59:52.445530 1967169 logs.go:284] No container was found matching "kube-apiserver"
	I0120 13:59:52.445539 1967169 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 13:59:52.445615 1967169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 13:59:52.496705 1967169 cri.go:89] found id: ""
	I0120 13:59:52.496739 1967169 logs.go:282] 0 containers: []
	W0120 13:59:52.496750 1967169 logs.go:284] No container was found matching "etcd"
	I0120 13:59:52.496758 1967169 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 13:59:52.496835 1967169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 13:59:52.542385 1967169 cri.go:89] found id: ""
	I0120 13:59:52.542420 1967169 logs.go:282] 0 containers: []
	W0120 13:59:52.542431 1967169 logs.go:284] No container was found matching "coredns"
	I0120 13:59:52.542441 1967169 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 13:59:52.542518 1967169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 13:59:52.586427 1967169 cri.go:89] found id: ""
	I0120 13:59:52.586464 1967169 logs.go:282] 0 containers: []
	W0120 13:59:52.586473 1967169 logs.go:284] No container was found matching "kube-scheduler"
	I0120 13:59:52.586479 1967169 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 13:59:52.586541 1967169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 13:59:52.625607 1967169 cri.go:89] found id: ""
	I0120 13:59:52.625644 1967169 logs.go:282] 0 containers: []
	W0120 13:59:52.625657 1967169 logs.go:284] No container was found matching "kube-proxy"
	I0120 13:59:52.625667 1967169 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 13:59:52.625745 1967169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 13:59:52.662132 1967169 cri.go:89] found id: ""
	I0120 13:59:52.662169 1967169 logs.go:282] 0 containers: []
	W0120 13:59:52.662181 1967169 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 13:59:52.662190 1967169 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 13:59:52.662259 1967169 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 13:59:52.708118 1967169 cri.go:89] found id: ""
	I0120 13:59:52.708146 1967169 logs.go:282] 0 containers: []
	W0120 13:59:52.708155 1967169 logs.go:284] No container was found matching "kindnet"
	I0120 13:59:52.708166 1967169 logs.go:123] Gathering logs for kubelet ...
	I0120 13:59:52.708183 1967169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 13:59:52.759246 1967169 logs.go:123] Gathering logs for dmesg ...
	I0120 13:59:52.759295 1967169 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 13:59:52.773898 1967169 logs.go:123] Gathering logs for describe nodes ...
	I0120 13:59:52.773931 1967169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 13:59:52.901102 1967169 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 13:59:52.901138 1967169 logs.go:123] Gathering logs for CRI-O ...
	I0120 13:59:52.901160 1967169 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 13:59:53.005721 1967169 logs.go:123] Gathering logs for container status ...
	I0120 13:59:53.005772 1967169 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
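	The diagnostics gathered in the lines above can be collected manually as well. A sketch that mirrors the logged commands, assuming a shell on the VM:

	    sudo journalctl -u kubelet -n 400 --no-pager       # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig         # fails here: the apiserver on :8443 never came up
	    sudo journalctl -u crio -n 400 --no-pager           # CRI-O logs
	    sudo crictl ps -a                                    # container status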
	W0120 13:59:53.055174 1967169 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 13:59:53.055285 1967169 out.go:270] * 
	* 
	W0120 13:59:53.055379 1967169 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 13:59:53.055400 1967169 out.go:270] * 
	* 
	W0120 13:59:53.056316 1967169 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 13:59:53.059339 1967169 out.go:201] 
	W0120 13:59:53.060706 1967169 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 13:59:53.060751 1967169 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 13:59:53.060778 1967169 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 13:59:53.062591 1967169 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-191446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
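The start above exits through minikube's K8S_KUBELET_NOT_RUNNING path, and the captured log already names the next steps: inspect the kubelet on the node and retry with an explicit cgroup driver. A minimal troubleshooting sketch, assuming the profile name and flags from the failing command and the suggestions printed in the log (nothing here was run against this report):

	# Inspect the kubelet inside the VM (the commands kubeadm suggests above)
	out/minikube-linux-amd64 -p old-k8s-version-191446 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-191446 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-191446 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the first start with the cgroup driver pinned to systemd, per the suggestion line above
	out/minikube-linux-amd64 start -p old-k8s-version-191446 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd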
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 6 (268.502265ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 13:59:53.370350 1970308 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-191446" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-191446" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (273.27s)
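The post-mortem status also shows the kubeconfig drift behind the exit-status-6 warning: the "old-k8s-version-191446" entry is missing from the kubeconfig and kubectl still points at a stale minikube-vm. A small sketch of the repair the warning itself suggests, assuming the context name matches the profile name (minikube's default):

	out/minikube-linux-amd64 -p old-k8s-version-191446 update-context
	kubectl config get-contexts                        # confirm the profile's context is present again
	kubectl config use-context old-k8s-version-191446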

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (1600.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-648067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-648067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: signal: killed (26m37.726633876s)
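This run did not fail on its own; it was killed by the test timeout after roughly 26.5 minutes while still verifying Kubernetes components (see the stdout/stderr below). A first-pass check for a hang like this, assuming the same profile and binary path as the test; the kubectl context name is an assumption based on minikube naming the context after the profile:

	out/minikube-linux-amd64 -p no-preload-648067 status
	out/minikube-linux-amd64 -p no-preload-648067 logs --file=logs.txt   # the log bundle minikube's error banners recommend attaching
	kubectl --context no-preload-648067 get pods -A                      # see which components never became Ready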

                                                
                                                
-- stdout --
	* [no-preload-648067] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-648067" primary control-plane node in "no-preload-648067" cluster
	* Restarting existing kvm2 VM for "no-preload-648067" ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-648067 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:59:10.187595 1969949 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:59:10.187702 1969949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:59:10.187710 1969949 out.go:358] Setting ErrFile to fd 2...
	I0120 13:59:10.187714 1969949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:59:10.187891 1969949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:59:10.188460 1969949 out.go:352] Setting JSON to false
	I0120 13:59:10.189455 1969949 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20496,"bootTime":1737361054,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 13:59:10.189569 1969949 start.go:139] virtualization: kvm guest
	I0120 13:59:10.191994 1969949 out.go:177] * [no-preload-648067] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 13:59:10.193591 1969949 notify.go:220] Checking for updates...
	I0120 13:59:10.193614 1969949 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:59:10.195261 1969949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:59:10.196811 1969949 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:59:10.198093 1969949 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:59:10.199452 1969949 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 13:59:10.200908 1969949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:59:10.202704 1969949 config.go:182] Loaded profile config "no-preload-648067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:59:10.203076 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:59:10.203132 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:59:10.219358 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45361
	I0120 13:59:10.220072 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:59:10.220809 1969949 main.go:141] libmachine: Using API Version  1
	I0120 13:59:10.220844 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:59:10.221233 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:59:10.221429 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 13:59:10.221787 1969949 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:59:10.222136 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:59:10.222179 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:59:10.239666 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34015
	I0120 13:59:10.240118 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:59:10.240708 1969949 main.go:141] libmachine: Using API Version  1
	I0120 13:59:10.240733 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:59:10.241066 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:59:10.241270 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 13:59:10.280315 1969949 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 13:59:10.281750 1969949 start.go:297] selected driver: kvm2
	I0120 13:59:10.281772 1969949 start.go:901] validating driver "kvm2" against &{Name:no-preload-648067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-648067 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:59:10.281915 1969949 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:59:10.282584 1969949 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.282714 1969949 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 13:59:10.298437 1969949 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 13:59:10.298871 1969949 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 13:59:10.298909 1969949 cni.go:84] Creating CNI manager for ""
	I0120 13:59:10.298960 1969949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:59:10.298994 1969949 start.go:340] cluster config:
	{Name:no-preload-648067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-648067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:59:10.299110 1969949 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.300937 1969949 out.go:177] * Starting "no-preload-648067" primary control-plane node in "no-preload-648067" cluster
	I0120 13:59:10.302229 1969949 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 13:59:10.302379 1969949 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/config.json ...
	I0120 13:59:10.302452 1969949 cache.go:107] acquiring lock: {Name:mkf73d087f87e87d169fc5c448a6f5b33e5144f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.302494 1969949 cache.go:107] acquiring lock: {Name:mkf5af5c59e0ef0cc9b71e06a23edac86459e23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.302494 1969949 cache.go:107] acquiring lock: {Name:mk9d1cfc1a35e31a8be2bc7b01bbda5df61d9667 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.302551 1969949 cache.go:115] /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0120 13:59:10.302566 1969949 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.04µs
	I0120 13:59:10.302591 1969949 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0120 13:59:10.302629 1969949 cache.go:115] /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 exists
	I0120 13:59:10.302646 1969949 cache.go:115] /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0120 13:59:10.302641 1969949 cache.go:107] acquiring lock: {Name:mk61ff52e6c796534c65eaa3db260dc9e19c73cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.302660 1969949 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 181.314µs
	I0120 13:59:10.302666 1969949 start.go:360] acquireMachinesLock for no-preload-648067: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 13:59:10.302674 1969949 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0120 13:59:10.302624 1969949 cache.go:107] acquiring lock: {Name:mk7edfce3a6418572de4e61db80b53b4ccd131a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.302689 1969949 cache.go:115] /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0120 13:59:10.302699 1969949 start.go:364] duration metric: took 19.29µs to acquireMachinesLock for "no-preload-648067"
	I0120 13:59:10.302664 1969949 cache.go:107] acquiring lock: {Name:mk12e87e6b0ba577c42d601193a25a664b6eada1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.302714 1969949 start.go:96] Skipping create...Using existing machine configuration
	I0120 13:59:10.302657 1969949 cache.go:107] acquiring lock: {Name:mk7b02aafbeeabe44131fe3387947b26b923476c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.302644 1969949 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.0" -> "/home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0" took 164.273µs
	I0120 13:59:10.302770 1969949 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.0 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 succeeded
	I0120 13:59:10.302721 1969949 fix.go:54] fixHost starting: 
	I0120 13:59:10.302664 1969949 cache.go:107] acquiring lock: {Name:mk9362e1703e54d1c6d9a7b8ffdbf8eb230097be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 13:59:10.302804 1969949 cache.go:115] /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 exists
	I0120 13:59:10.302829 1969949 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.0" -> "/home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0" took 239.889µs
	I0120 13:59:10.302832 1969949 cache.go:115] /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0120 13:59:10.302839 1969949 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.0 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 succeeded
	I0120 13:59:10.302833 1969949 cache.go:115] /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 exists
	I0120 13:59:10.302703 1969949 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 66.178µs
	I0120 13:59:10.302863 1969949 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0120 13:59:10.302849 1969949 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 327.538µs
	I0120 13:59:10.302868 1969949 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.0" -> "/home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0" took 190.513µs
	I0120 13:59:10.302876 1969949 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0120 13:59:10.302879 1969949 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.0 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 succeeded
	I0120 13:59:10.302894 1969949 cache.go:115] /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 exists
	I0120 13:59:10.302919 1969949 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.0" -> "/home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0" took 319.086µs
	I0120 13:59:10.302938 1969949 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.0 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 succeeded
	I0120 13:59:10.302952 1969949 cache.go:87] Successfully saved all images to host disk.
	I0120 13:59:10.303195 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:59:10.303251 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:59:10.318829 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42513
	I0120 13:59:10.319277 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:59:10.319975 1969949 main.go:141] libmachine: Using API Version  1
	I0120 13:59:10.320023 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:59:10.320353 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:59:10.320579 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 13:59:10.320751 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 13:59:10.322489 1969949 fix.go:112] recreateIfNeeded on no-preload-648067: state=Stopped err=<nil>
	I0120 13:59:10.322520 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	W0120 13:59:10.322703 1969949 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 13:59:10.324925 1969949 out.go:177] * Restarting existing kvm2 VM for "no-preload-648067" ...
	I0120 13:59:10.326364 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Start
	I0120 13:59:10.326632 1969949 main.go:141] libmachine: (no-preload-648067) starting domain...
	I0120 13:59:10.326657 1969949 main.go:141] libmachine: (no-preload-648067) ensuring networks are active...
	I0120 13:59:10.327599 1969949 main.go:141] libmachine: (no-preload-648067) Ensuring network default is active
	I0120 13:59:10.328121 1969949 main.go:141] libmachine: (no-preload-648067) Ensuring network mk-no-preload-648067 is active
	I0120 13:59:10.328539 1969949 main.go:141] libmachine: (no-preload-648067) getting domain XML...
	I0120 13:59:10.329433 1969949 main.go:141] libmachine: (no-preload-648067) creating domain...
	I0120 13:59:11.583493 1969949 main.go:141] libmachine: (no-preload-648067) waiting for IP...
	I0120 13:59:11.584467 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:11.584964 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:11.584994 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:11.584918 1969984 retry.go:31] will retry after 298.58597ms: waiting for domain to come up
	I0120 13:59:11.885827 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:11.886471 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:11.886499 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:11.886428 1969984 retry.go:31] will retry after 345.259869ms: waiting for domain to come up
	I0120 13:59:12.233111 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:12.233796 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:12.233838 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:12.233753 1969984 retry.go:31] will retry after 394.195886ms: waiting for domain to come up
	I0120 13:59:12.629406 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:12.629884 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:12.629917 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:12.629837 1969984 retry.go:31] will retry after 411.08242ms: waiting for domain to come up
	I0120 13:59:13.042412 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:13.043064 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:13.043093 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:13.043014 1969984 retry.go:31] will retry after 511.531492ms: waiting for domain to come up
	I0120 13:59:13.555785 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:13.556339 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:13.556375 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:13.556291 1969984 retry.go:31] will retry after 574.557113ms: waiting for domain to come up
	I0120 13:59:14.133074 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:14.133658 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:14.133690 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:14.133615 1969984 retry.go:31] will retry after 785.380145ms: waiting for domain to come up
	I0120 13:59:14.920642 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:14.921230 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:14.921312 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:14.921206 1969984 retry.go:31] will retry after 1.041408912s: waiting for domain to come up
	I0120 13:59:15.964644 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:15.965149 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:15.965178 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:15.965114 1969984 retry.go:31] will retry after 1.578825178s: waiting for domain to come up
	I0120 13:59:17.546110 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:17.546595 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:17.546631 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:17.546577 1969984 retry.go:31] will retry after 2.0478689s: waiting for domain to come up
	I0120 13:59:19.596067 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:19.596568 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:19.596597 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:19.596545 1969984 retry.go:31] will retry after 2.676220246s: waiting for domain to come up
	I0120 13:59:22.276038 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:22.276584 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:22.276620 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:22.276533 1969984 retry.go:31] will retry after 2.473034474s: waiting for domain to come up
	I0120 13:59:24.751904 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:24.752454 1969949 main.go:141] libmachine: (no-preload-648067) DBG | unable to find current IP address of domain no-preload-648067 in network mk-no-preload-648067
	I0120 13:59:24.752480 1969949 main.go:141] libmachine: (no-preload-648067) DBG | I0120 13:59:24.752417 1969984 retry.go:31] will retry after 3.90240771s: waiting for domain to come up
	I0120 13:59:28.659165 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.659721 1969949 main.go:141] libmachine: (no-preload-648067) found domain IP: 192.168.39.76
	I0120 13:59:28.659761 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has current primary IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.659772 1969949 main.go:141] libmachine: (no-preload-648067) reserving static IP address...
	I0120 13:59:28.660260 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "no-preload-648067", mac: "52:54:00:cb:3b:04", ip: "192.168.39.76"} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:28.660293 1969949 main.go:141] libmachine: (no-preload-648067) reserved static IP address 192.168.39.76 for domain no-preload-648067
	I0120 13:59:28.660312 1969949 main.go:141] libmachine: (no-preload-648067) DBG | skip adding static IP to network mk-no-preload-648067 - found existing host DHCP lease matching {name: "no-preload-648067", mac: "52:54:00:cb:3b:04", ip: "192.168.39.76"}
	I0120 13:59:28.660330 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Getting to WaitForSSH function...
	I0120 13:59:28.660341 1969949 main.go:141] libmachine: (no-preload-648067) waiting for SSH...
	I0120 13:59:28.662520 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.662954 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:28.662979 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.663083 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Using SSH client type: external
	I0120 13:59:28.663127 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa (-rw-------)
	I0120 13:59:28.663165 1969949 main.go:141] libmachine: (no-preload-648067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 13:59:28.663182 1969949 main.go:141] libmachine: (no-preload-648067) DBG | About to run SSH command:
	I0120 13:59:28.663204 1969949 main.go:141] libmachine: (no-preload-648067) DBG | exit 0
	I0120 13:59:28.795079 1969949 main.go:141] libmachine: (no-preload-648067) DBG | SSH cmd err, output: <nil>: 
	I0120 13:59:28.795565 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetConfigRaw
	I0120 13:59:28.796369 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetIP
	I0120 13:59:28.798801 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.799220 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:28.799250 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.799554 1969949 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/config.json ...
	I0120 13:59:28.799800 1969949 machine.go:93] provisionDockerMachine start ...
	I0120 13:59:28.799824 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 13:59:28.800085 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:28.802512 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.802910 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:28.802952 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.803101 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 13:59:28.803269 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:28.803428 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:28.803602 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 13:59:28.803765 1969949 main.go:141] libmachine: Using SSH client type: native
	I0120 13:59:28.803985 1969949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0120 13:59:28.804009 1969949 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 13:59:28.915355 1969949 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 13:59:28.915389 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetMachineName
	I0120 13:59:28.915614 1969949 buildroot.go:166] provisioning hostname "no-preload-648067"
	I0120 13:59:28.915641 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetMachineName
	I0120 13:59:28.915855 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:28.918660 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.919030 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:28.919056 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:28.919257 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 13:59:28.919441 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:28.919567 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:28.919704 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 13:59:28.919856 1969949 main.go:141] libmachine: Using SSH client type: native
	I0120 13:59:28.920067 1969949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0120 13:59:28.920081 1969949 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-648067 && echo "no-preload-648067" | sudo tee /etc/hostname
	I0120 13:59:29.047973 1969949 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-648067
	
	I0120 13:59:29.048016 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:29.051307 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.051653 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:29.051684 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.051948 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 13:59:29.052164 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:29.052369 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:29.052540 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 13:59:29.052719 1969949 main.go:141] libmachine: Using SSH client type: native
	I0120 13:59:29.052903 1969949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0120 13:59:29.052919 1969949 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-648067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-648067/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-648067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 13:59:29.173164 1969949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 13:59:29.173202 1969949 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 13:59:29.173271 1969949 buildroot.go:174] setting up certificates
	I0120 13:59:29.173283 1969949 provision.go:84] configureAuth start
	I0120 13:59:29.173302 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetMachineName
	I0120 13:59:29.173677 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetIP
	I0120 13:59:29.176836 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.177367 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:29.177393 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.177622 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:29.179818 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.180226 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:29.180268 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.180414 1969949 provision.go:143] copyHostCerts
	I0120 13:59:29.180487 1969949 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 13:59:29.180509 1969949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 13:59:29.180578 1969949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 13:59:29.180668 1969949 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 13:59:29.180676 1969949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 13:59:29.180700 1969949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 13:59:29.180751 1969949 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 13:59:29.180758 1969949 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 13:59:29.180785 1969949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 13:59:29.180830 1969949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.no-preload-648067 san=[127.0.0.1 192.168.39.76 localhost minikube no-preload-648067]
	I0120 13:59:29.427191 1969949 provision.go:177] copyRemoteCerts
	I0120 13:59:29.427255 1969949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 13:59:29.427291 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:29.430203 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.430525 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:29.430553 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.430843 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 13:59:29.431071 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:29.431252 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 13:59:29.431388 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 13:59:29.517140 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 13:59:29.544239 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 13:59:29.570141 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 13:59:29.595556 1969949 provision.go:87] duration metric: took 422.254849ms to configureAuth
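	Note: configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.39.76, localhost, minikube, no-preload-648067) and copies it to /etc/docker/server.pem on the guest. A minimal sanity check, assuming SSH access to this profile via the same minikube binary, would be:
	    out/minikube-linux-amd64 -p no-preload-648067 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"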
	I0120 13:59:29.595586 1969949 buildroot.go:189] setting minikube options for container-runtime
	I0120 13:59:29.595756 1969949 config.go:182] Loaded profile config "no-preload-648067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:59:29.595859 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:29.599011 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.599466 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:29.599502 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.599734 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 13:59:29.599957 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:29.600135 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:29.600354 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 13:59:29.600563 1969949 main.go:141] libmachine: Using SSH client type: native
	I0120 13:59:29.600731 1969949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0120 13:59:29.600746 1969949 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 13:59:29.842672 1969949 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 13:59:29.842703 1969949 machine.go:96] duration metric: took 1.042884208s to provisionDockerMachine
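	Note: the SSH command above writes /etc/sysconfig/crio.minikube containing CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' and restarts CRI-O, so registries exposed on the service CIDR can be pulled from without TLS. A quick manual confirmation (a sketch, not part of the test flow) would be:
	    sudo cat /etc/sysconfig/crio.minikube
	    sudo systemctl is-active crio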
	I0120 13:59:29.842718 1969949 start.go:293] postStartSetup for "no-preload-648067" (driver="kvm2")
	I0120 13:59:29.842733 1969949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 13:59:29.842762 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 13:59:29.843133 1969949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 13:59:29.843174 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:29.846690 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.847171 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:29.847211 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.847411 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 13:59:29.847583 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:29.847829 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 13:59:29.848031 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 13:59:29.937315 1969949 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 13:59:29.941801 1969949 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 13:59:29.941831 1969949 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 13:59:29.941910 1969949 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 13:59:29.942014 1969949 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 13:59:29.942161 1969949 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 13:59:29.952248 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:59:29.978029 1969949 start.go:296] duration metric: took 135.290166ms for postStartSetup
	I0120 13:59:29.978085 1969949 fix.go:56] duration metric: took 19.675363075s for fixHost
	I0120 13:59:29.978109 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:29.980925 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.981329 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:29.981367 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:29.981487 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 13:59:29.981744 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:29.981927 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:29.982150 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 13:59:29.982333 1969949 main.go:141] libmachine: Using SSH client type: native
	I0120 13:59:29.982579 1969949 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0120 13:59:29.982593 1969949 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 13:59:30.096307 1969949 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381570.065960187
	
	I0120 13:59:30.096331 1969949 fix.go:216] guest clock: 1737381570.065960187
	I0120 13:59:30.096338 1969949 fix.go:229] Guest: 2025-01-20 13:59:30.065960187 +0000 UTC Remote: 2025-01-20 13:59:29.978089855 +0000 UTC m=+19.837448884 (delta=87.870332ms)
	I0120 13:59:30.096370 1969949 fix.go:200] guest clock delta is within tolerance: 87.870332ms
	I0120 13:59:30.096376 1969949 start.go:83] releasing machines lock for "no-preload-648067", held for 19.793669314s
	I0120 13:59:30.096394 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 13:59:30.096759 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetIP
	I0120 13:59:30.099650 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:30.100082 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:30.100115 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:30.100314 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 13:59:30.100808 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 13:59:30.100984 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 13:59:30.101079 1969949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 13:59:30.101132 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:30.101182 1969949 ssh_runner.go:195] Run: cat /version.json
	I0120 13:59:30.101209 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 13:59:30.103647 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:30.103674 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:30.104010 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:30.104061 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:30.104086 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:30.104102 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:30.104199 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 13:59:30.104314 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 13:59:30.104395 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:30.104455 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 13:59:30.104537 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 13:59:30.104594 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 13:59:30.104653 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 13:59:30.104723 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 13:59:30.207948 1969949 ssh_runner.go:195] Run: systemctl --version
	I0120 13:59:30.214677 1969949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 13:59:30.363779 1969949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 13:59:30.372547 1969949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 13:59:30.372650 1969949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 13:59:30.389362 1969949 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
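	Note: the find/mv step renames any bridge or podman CNI configs under /etc/cni/net.d to *.mk_disabled so they cannot conflict with the CNI that minikube configures later; here it disabled 87-podman-bridge.conflist. A hypothetical check of what remains active:
	    sudo ls -la /etc/cni/net.d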
	I0120 13:59:30.389401 1969949 start.go:495] detecting cgroup driver to use...
	I0120 13:59:30.389487 1969949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 13:59:30.407250 1969949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 13:59:30.422478 1969949 docker.go:217] disabling cri-docker service (if available) ...
	I0120 13:59:30.422550 1969949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 13:59:30.437054 1969949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 13:59:30.451457 1969949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 13:59:30.572343 1969949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 13:59:30.752286 1969949 docker.go:233] disabling docker service ...
	I0120 13:59:30.752351 1969949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 13:59:30.767795 1969949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 13:59:30.781831 1969949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 13:59:30.909429 1969949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 13:59:31.033344 1969949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
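	Note: because this profile uses ContainerRuntime=crio, the cri-docker and docker units are stopped, disabled and masked so only CRI-O answers on the CRI socket; a masked unit cannot be started until it is unmasked. A sketch of a status check:
	    systemctl is-enabled docker.service cri-docker.service crio.service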
	I0120 13:59:31.049031 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 13:59:31.071134 1969949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 13:59:31.071212 1969949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:59:31.083119 1969949 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 13:59:31.083325 1969949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:59:31.095583 1969949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:59:31.107567 1969949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:59:31.118955 1969949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 13:59:31.130916 1969949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:59:31.142421 1969949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 13:59:31.162249 1969949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
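	Note: the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to "cgroupfs" with conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" is added under default_sysctls. The resulting drop-in can be inspected with (sketch):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf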
	I0120 13:59:31.175196 1969949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 13:59:31.186650 1969949 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 13:59:31.186724 1969949 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 13:59:31.202985 1969949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
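	Note: the first sysctl probe fails because the br_netfilter module was not loaded, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet; `sudo modprobe br_netfilter` creates it, and IPv4 forwarding is then enabled. The equivalent manual check (sketch):
	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward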
	I0120 13:59:31.214019 1969949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:59:31.331675 1969949 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 13:59:31.437359 1969949 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 13:59:31.437443 1969949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 13:59:31.442314 1969949 start.go:563] Will wait 60s for crictl version
	I0120 13:59:31.442365 1969949 ssh_runner.go:195] Run: which crictl
	I0120 13:59:31.446589 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 13:59:31.486956 1969949 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 13:59:31.487071 1969949 ssh_runner.go:195] Run: crio --version
	I0120 13:59:31.517240 1969949 ssh_runner.go:195] Run: crio --version
	I0120 13:59:31.552102 1969949 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 13:59:31.553604 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetIP
	I0120 13:59:31.556994 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:31.557398 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 13:59:31.557422 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 13:59:31.557729 1969949 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0120 13:59:31.562365 1969949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:59:31.576215 1969949 kubeadm.go:883] updating cluster {Name:no-preload-648067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-648067 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 13:59:31.576347 1969949 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 13:59:31.576392 1969949 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 13:59:31.613177 1969949 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 13:59:31.613207 1969949 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.0 registry.k8s.io/kube-controller-manager:v1.32.0 registry.k8s.io/kube-scheduler:v1.32.0 registry.k8s.io/kube-proxy:v1.32.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 13:59:31.613255 1969949 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:59:31.613305 1969949 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I0120 13:59:31.613363 1969949 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0120 13:59:31.613388 1969949 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I0120 13:59:31.613401 1969949 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0120 13:59:31.613429 1969949 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I0120 13:59:31.613368 1969949 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 13:59:31.613598 1969949 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0120 13:59:31.614862 1969949 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:59:31.614872 1969949 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 13:59:31.614882 1969949 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0120 13:59:31.614884 1969949 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0120 13:59:31.614905 1969949 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0120 13:59:31.614915 1969949 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I0120 13:59:31.614916 1969949 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I0120 13:59:31.615005 1969949 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I0120 13:59:31.784696 1969949 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 13:59:31.786869 1969949 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0120 13:59:31.803926 1969949 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.0
	I0120 13:59:31.821353 1969949 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.0
	I0120 13:59:31.822784 1969949 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0120 13:59:31.845132 1969949 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0120 13:59:31.860825 1969949 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.0" does not exist at hash "8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3" in container runtime
	I0120 13:59:31.860896 1969949 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 13:59:31.860833 1969949 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0120 13:59:31.860951 1969949 ssh_runner.go:195] Run: which crictl
	I0120 13:59:31.860993 1969949 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0120 13:59:31.861050 1969949 ssh_runner.go:195] Run: which crictl
	I0120 13:59:31.873957 1969949 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.0
	I0120 13:59:31.943934 1969949 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.0" does not exist at hash "c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4" in container runtime
	I0120 13:59:31.943982 1969949 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.0
	I0120 13:59:31.944037 1969949 ssh_runner.go:195] Run: which crictl
	I0120 13:59:31.970598 1969949 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0120 13:59:31.970666 1969949 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.0" does not exist at hash "a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5" in container runtime
	I0120 13:59:31.970679 1969949 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0120 13:59:31.970692 1969949 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.0
	I0120 13:59:31.970739 1969949 ssh_runner.go:195] Run: which crictl
	I0120 13:59:31.970740 1969949 ssh_runner.go:195] Run: which crictl
	I0120 13:59:32.029082 1969949 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:59:32.101425 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 13:59:32.101462 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0120 13:59:32.101547 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I0120 13:59:32.101542 1969949 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.0" needs transfer: "registry.k8s.io/kube-proxy:v1.32.0" does not exist at hash "040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08" in container runtime
	I0120 13:59:32.101613 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0120 13:59:32.101671 1969949 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0120 13:59:32.101692 1969949 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:59:32.101616 1969949 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.0
	I0120 13:59:32.101719 1969949 ssh_runner.go:195] Run: which crictl
	I0120 13:59:32.101727 1969949 ssh_runner.go:195] Run: which crictl
	I0120 13:59:32.101634 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I0120 13:59:32.213883 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I0120 13:59:32.218347 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I0120 13:59:32.218410 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 13:59:32.218410 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0120 13:59:32.218466 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:59:32.218522 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0120 13:59:32.218550 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I0120 13:59:32.326587 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I0120 13:59:32.388281 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I0120 13:59:32.388297 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I0120 13:59:32.405511 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:59:32.405539 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I0120 13:59:32.405562 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0120 13:59:32.405700 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0120 13:59:32.484204 1969949 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0
	I0120 13:59:32.484345 1969949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0
	I0120 13:59:32.540023 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I0120 13:59:32.540038 1969949 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0
	I0120 13:59:32.540186 1969949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I0120 13:59:32.563625 1969949 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0
	I0120 13:59:32.563746 1969949 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 13:59:32.563805 1969949 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0120 13:59:32.563754 1969949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0
	I0120 13:59:32.563757 1969949 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0120 13:59:32.563974 1969949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0120 13:59:32.563840 1969949 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.0 (exists)
	I0120 13:59:32.564053 1969949 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.0
	I0120 13:59:32.564102 1969949 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0
	I0120 13:59:32.563867 1969949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0120 13:59:32.603853 1969949 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.0 (exists)
	I0120 13:59:32.603975 1969949 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0
	I0120 13:59:32.604139 1969949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0
	I0120 13:59:32.642106 1969949 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0120 13:59:32.642127 1969949 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.0 (exists)
	I0120 13:59:32.642166 1969949 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I0120 13:59:32.642234 1969949 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0120 13:59:35.369783 1969949 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0: (2.805651402s)
	I0120 13:59:35.369831 1969949 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 from cache
	I0120 13:59:35.369853 1969949 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: (2.805690042s)
	I0120 13:59:35.369900 1969949 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0120 13:59:35.369871 1969949 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I0120 13:59:35.369913 1969949 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0: (2.765749958s)
	I0120 13:59:35.369943 1969949 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.0 (exists)
	I0120 13:59:35.369964 1969949 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I0120 13:59:35.369975 1969949 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.727720455s)
	I0120 13:59:35.369999 1969949 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0120 13:59:37.349107 1969949 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0: (1.979109838s)
	I0120 13:59:37.349147 1969949 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 from cache
	I0120 13:59:37.349189 1969949 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.0
	I0120 13:59:37.349266 1969949 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0
	I0120 13:59:39.000928 1969949 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0: (1.65162818s)
	I0120 13:59:39.000979 1969949 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 from cache
	I0120 13:59:39.001059 1969949 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0120 13:59:39.001127 1969949 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0120 13:59:42.785855 1969949 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.784679945s)
	I0120 13:59:42.785917 1969949 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0120 13:59:42.785967 1969949 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0120 13:59:42.786055 1969949 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0120 13:59:44.746956 1969949 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.960869554s)
	I0120 13:59:44.746992 1969949 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0120 13:59:44.747061 1969949 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.0
	I0120 13:59:44.747125 1969949 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0
	I0120 13:59:46.913891 1969949 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0: (2.166734462s)
	I0120 13:59:46.913925 1969949 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 from cache
	I0120 13:59:46.913964 1969949 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0120 13:59:46.914043 1969949 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0120 13:59:47.864831 1969949 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0120 13:59:47.864893 1969949 cache_images.go:123] Successfully loaded all cached images
	I0120 13:59:47.864902 1969949 cache_images.go:92] duration metric: took 16.251678221s to LoadCachedImages
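	Note: since this is a no-preload profile there is no preloaded image tarball, so each control-plane image is copied from the host cache to /var/lib/minikube/images and loaded with `sudo podman load -i ...` (roughly 16s in total here). Whether the runtime now has them can be confirmed with the same crictl binary the log calls (sketch):
	    sudo /usr/bin/crictl images | grep -E 'kube-|etcd|coredns|pause|storage-provisioner'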
	I0120 13:59:47.864919 1969949 kubeadm.go:934] updating node { 192.168.39.76 8443 v1.32.0 crio true true} ...
	I0120 13:59:47.865080 1969949 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-648067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-648067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 13:59:47.865176 1969949 ssh_runner.go:195] Run: crio config
	I0120 13:59:47.913868 1969949 cni.go:84] Creating CNI manager for ""
	I0120 13:59:47.913898 1969949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:59:47.913911 1969949 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 13:59:47.913937 1969949 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.76 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-648067 NodeName:no-preload-648067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 13:59:47.914095 1969949 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-648067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 13:59:47.914187 1969949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 13:59:47.924799 1969949 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 13:59:47.924866 1969949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 13:59:47.934557 1969949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0120 13:59:47.951844 1969949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 13:59:47.969663 1969949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
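	Note: the generated kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents) is written to /var/tmp/minikube/kubeadm.yaml.new on the guest, where it is later diffed against any existing /var/tmp/minikube/kubeadm.yaml. To inspect it on the node (sketch):
	    sudo cat /var/tmp/minikube/kubeadm.yaml.new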
	I0120 13:59:47.989413 1969949 ssh_runner.go:195] Run: grep 192.168.39.76	control-plane.minikube.internal$ /etc/hosts
	I0120 13:59:47.995083 1969949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 13:59:48.010799 1969949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 13:59:48.144308 1969949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 13:59:48.162681 1969949 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067 for IP: 192.168.39.76
	I0120 13:59:48.162709 1969949 certs.go:194] generating shared ca certs ...
	I0120 13:59:48.162732 1969949 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:59:48.162906 1969949 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 13:59:48.162978 1969949 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 13:59:48.162990 1969949 certs.go:256] generating profile certs ...
	I0120 13:59:48.163134 1969949 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.key
	I0120 13:59:48.163222 1969949 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/apiserver.key.ef00cbc1
	I0120 13:59:48.163280 1969949 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/proxy-client.key
	I0120 13:59:48.163432 1969949 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 13:59:48.163482 1969949 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 13:59:48.163495 1969949 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 13:59:48.163524 1969949 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 13:59:48.163560 1969949 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 13:59:48.163590 1969949 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 13:59:48.163642 1969949 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 13:59:48.164494 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 13:59:48.203019 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 13:59:48.240533 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 13:59:48.274111 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 13:59:48.312130 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0120 13:59:48.349049 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 13:59:48.378307 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 13:59:48.408910 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 13:59:48.436077 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 13:59:48.463613 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 13:59:48.489743 1969949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 13:59:48.516206 1969949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 13:59:48.533995 1969949 ssh_runner.go:195] Run: openssl version
	I0120 13:59:48.540248 1969949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 13:59:48.552143 1969949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:59:48.556969 1969949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:59:48.557052 1969949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 13:59:48.563306 1969949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 13:59:48.575446 1969949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 13:59:48.587464 1969949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 13:59:48.592411 1969949 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 13:59:48.592490 1969949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 13:59:48.598547 1969949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 13:59:48.610219 1969949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 13:59:48.621648 1969949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 13:59:48.626255 1969949 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 13:59:48.626332 1969949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 13:59:48.632213 1969949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
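	Note: the pattern above installs each CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted CAs during verification. The hash in each symlink name comes from the same command the log runs, e.g. for minikubeCA (per the b5213941.0 link created above):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem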
	I0120 13:59:48.643879 1969949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 13:59:48.649092 1969949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 13:59:48.655831 1969949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 13:59:48.662388 1969949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 13:59:48.669030 1969949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 13:59:48.675555 1969949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 13:59:48.682187 1969949 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
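	Note: each `openssl x509 -checkend 86400` call exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration of that certificate. A standalone version of the same check (sketch):
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for 24h"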
	I0120 13:59:48.688810 1969949 kubeadm.go:392] StartCluster: {Name:no-preload-648067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-648067 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:59:48.688919 1969949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 13:59:48.688978 1969949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 13:59:48.741182 1969949 cri.go:89] found id: ""
	I0120 13:59:48.741266 1969949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 13:59:48.755138 1969949 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 13:59:48.755161 1969949 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 13:59:48.755227 1969949 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 13:59:48.768483 1969949 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 13:59:48.769433 1969949 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-648067" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:59:48.770020 1969949 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-648067" cluster setting kubeconfig missing "no-preload-648067" context setting]
	I0120 13:59:48.770887 1969949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 13:59:48.772577 1969949 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 13:59:48.783377 1969949 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.76
	I0120 13:59:48.783419 1969949 kubeadm.go:1160] stopping kube-system containers ...
	I0120 13:59:48.783436 1969949 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 13:59:48.783499 1969949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 13:59:48.820882 1969949 cri.go:89] found id: ""
	I0120 13:59:48.820997 1969949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 13:59:48.840058 1969949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 13:59:48.851931 1969949 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 13:59:48.851958 1969949 kubeadm.go:157] found existing configuration files:
	
	I0120 13:59:48.852009 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 13:59:48.862906 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 13:59:48.862980 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 13:59:48.874374 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 13:59:48.888055 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 13:59:48.888132 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 13:59:48.902499 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 13:59:48.914836 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 13:59:48.914898 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 13:59:48.927592 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 13:59:48.941760 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 13:59:48.941832 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 13:59:48.953911 1969949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 13:59:48.968091 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:59:49.105039 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:59:49.802698 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:59:50.017097 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:59:50.080221 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:59:50.162776 1969949 api_server.go:52] waiting for apiserver process to appear ...
	I0120 13:59:50.162867 1969949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:59:50.663686 1969949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:59:51.163149 1969949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:59:51.663168 1969949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:59:51.679188 1969949 api_server.go:72] duration metric: took 1.516412242s to wait for apiserver process to appear ...
	I0120 13:59:51.679229 1969949 api_server.go:88] waiting for apiserver healthz status ...
	I0120 13:59:51.679282 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 13:59:54.475921 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 13:59:54.475953 1969949 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 13:59:54.475970 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 13:59:54.497961 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 13:59:54.497990 1969949 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 13:59:54.679322 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 13:59:54.687153 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 13:59:54.687187 1969949 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 13:59:55.179756 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 13:59:55.186452 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 13:59:55.186488 1969949 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 13:59:55.680079 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 13:59:55.690164 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 13:59:55.690200 1969949 api_server.go:103] status: https://192.168.39.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 13:59:56.179707 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 13:59:56.185403 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0120 13:59:56.194075 1969949 api_server.go:141] control plane version: v1.32.0
	I0120 13:59:56.194121 1969949 api_server.go:131] duration metric: took 4.514883261s to wait for apiserver health ...
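
The 403 and then 500 responses above come from polling the apiserver's /healthz endpoint anonymously while it is still finishing startup: the anonymous request is rejected outright until the bootstrap RBAC policy that permits unauthenticated health checks is in place, and after that individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) report failed until they complete. The same probe can be reproduced by hand against the address shown in the log; the ?verbose query string is what produces the per-check [+]/[-] breakdown (a sketch, not part of the minikube output):

    curl -ks "https://192.168.39.76:8443/healthz?verbose"
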
	I0120 13:59:56.194134 1969949 cni.go:84] Creating CNI manager for ""
	I0120 13:59:56.194144 1969949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 13:59:56.196078 1969949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 13:59:56.197384 1969949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 13:59:56.214428 1969949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
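
The conflist that was just copied is not printed in the log (only its size, 496 bytes), so the file below is purely illustrative: it shows the general shape of a bridge + host-local CNI configuration of the kind the "Configuring bridge CNI" step refers to, with every concrete value (name, bridge device, subnet, target path) assumed rather than taken from minikube:

    cat <<'EOF' > /tmp/example-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
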
	I0120 13:59:56.260826 1969949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 13:59:56.276629 1969949 system_pods.go:59] 8 kube-system pods found
	I0120 13:59:56.276677 1969949 system_pods.go:61] "coredns-668d6bf9bc-cjkzg" [e2cbed0d-5bab-49de-b780-19e54a8278c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 13:59:56.276685 1969949 system_pods.go:61] "etcd-no-preload-648067" [6739eada-47c1-4244-b13b-94d6c33eb0fc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 13:59:56.276694 1969949 system_pods.go:61] "kube-apiserver-no-preload-648067" [b658148e-4441-49f8-948a-c8c5027058b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 13:59:56.276700 1969949 system_pods.go:61] "kube-controller-manager-no-preload-648067" [2b89cff9-00ae-4946-bb73-fd7df2915670] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 13:59:56.276706 1969949 system_pods.go:61] "kube-proxy-wpxqm" [4f64cbfe-92ac-4b6f-b162-41aee74adff7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 13:59:56.276710 1969949 system_pods.go:61] "kube-scheduler-no-preload-648067" [ca0bf8d2-b0aa-4fbd-aaf6-c1953868d749] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 13:59:56.276716 1969949 system_pods.go:61] "metrics-server-f79f97bbb-bp4mx" [308a880f-a6ee-425d-97f3-57b35c51f4b7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 13:59:56.276721 1969949 system_pods.go:61] "storage-provisioner" [8fe7a1cb-ae4d-42f8-b216-3ffbe7277b58] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 13:59:56.276728 1969949 system_pods.go:74] duration metric: took 15.872988ms to wait for pod list to return data ...
	I0120 13:59:56.276737 1969949 node_conditions.go:102] verifying NodePressure condition ...
	I0120 13:59:56.280596 1969949 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 13:59:56.280627 1969949 node_conditions.go:123] node cpu capacity is 2
	I0120 13:59:56.280642 1969949 node_conditions.go:105] duration metric: took 3.900412ms to run NodePressure ...
	I0120 13:59:56.280658 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 13:59:56.619869 1969949 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 13:59:56.630949 1969949 kubeadm.go:739] kubelet initialised
	I0120 13:59:56.630982 1969949 kubeadm.go:740] duration metric: took 11.070937ms waiting for restarted kubelet to initialise ...
	I0120 13:59:56.630993 1969949 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 13:59:56.636058 1969949 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-cjkzg" in "kube-system" namespace to be "Ready" ...
	I0120 13:59:58.642514 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-cjkzg" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:00.642664 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-cjkzg" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:02.642487 1969949 pod_ready.go:93] pod "coredns-668d6bf9bc-cjkzg" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:02.642515 1969949 pod_ready.go:82] duration metric: took 6.006427414s for pod "coredns-668d6bf9bc-cjkzg" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:02.642526 1969949 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:04.651628 1969949 pod_ready.go:103] pod "etcd-no-preload-648067" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:06.652107 1969949 pod_ready.go:103] pod "etcd-no-preload-648067" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:09.150752 1969949 pod_ready.go:103] pod "etcd-no-preload-648067" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:11.149853 1969949 pod_ready.go:93] pod "etcd-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:11.149901 1969949 pod_ready.go:82] duration metric: took 8.507369662s for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:11.149913 1969949 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:11.157709 1969949 pod_ready.go:93] pod "kube-apiserver-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:11.157740 1969949 pod_ready.go:82] duration metric: took 7.820417ms for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:11.157752 1969949 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:11.164494 1969949 pod_ready.go:93] pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:11.164526 1969949 pod_ready.go:82] duration metric: took 6.765624ms for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:11.164540 1969949 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wpxqm" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:11.172352 1969949 pod_ready.go:93] pod "kube-proxy-wpxqm" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:11.172386 1969949 pod_ready.go:82] duration metric: took 7.836393ms for pod "kube-proxy-wpxqm" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:11.172402 1969949 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:11.178205 1969949 pod_ready.go:93] pod "kube-scheduler-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:11.178235 1969949 pod_ready.go:82] duration metric: took 5.822924ms for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:11.178250 1969949 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:13.187943 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:15.685347 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:17.685872 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:20.187364 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:22.189987 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:24.686036 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:27.187725 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:29.686055 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:31.686932 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:34.185985 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:36.489368 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:38.685088 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:40.685607 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:42.686457 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:45.185009 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:47.186651 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:49.684461 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:51.685040 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:53.685810 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:56.188754 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:58.685540 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:00.685774 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:02.686004 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:05.186979 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:07.684630 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:09.685670 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:11.686111 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:13.686796 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:16.186055 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:18.186345 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:20.685722 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:23.185694 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:25.685892 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:28.186046 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:30.186915 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:32.686925 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:35.185223 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:37.187266 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:39.686105 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.185136 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:44.686564 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.185914 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:49.187141 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:51.189623 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:53.687386 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:55.714950 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:58.186438 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:00.689940 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:03.185114 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:05.188225 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:07.685349 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:10.186075 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.186380 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:14.187082 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:16.687293 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:18.717798 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:21.185126 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:23.186698 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:25.686765 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:28.186774 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:30.685246 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:33.195660 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:35.685341 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:37.685683 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:40.185648 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:42.185876 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:44.685996 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:46.686210 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:49.185352 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:51.685546 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:53.685814 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.686206 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:58.186818 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:00.685031 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:03.185220 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:05.684845 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:07.685532 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:10.188443 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:12.684802 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:14.685044 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:16.686668 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.185833 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:21.685011 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:24.185145 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:26.186599 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:28.684816 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:30.686778 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:33.184968 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:35.185398 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:37.685015 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:39.689385 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:42.189707 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:44.686641 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.184802 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:49.184874 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:51.185101 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:53.186284 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:55.186474 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.186658 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:59.686959 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:02.185020 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:04.185796 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.188401 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:08.685683 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:11.178584 1969949 pod_ready.go:82] duration metric: took 4m0.000311545s for pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace to be "Ready" ...
	E0120 14:04:11.178646 1969949 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 14:04:11.178676 1969949 pod_ready.go:39] duration metric: took 4m14.547669609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
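
The long run of pod_ready.go:103 lines above is minikube re-checking the metrics-server pod every couple of seconds; every other system pod went Ready within seconds, but metrics-server-f79f97bbb-bp4mx never did, so the 4m0s extra-wait budget expired. The same condition can be checked directly with kubectl (a sketch using the pod name from the log; it exits non-zero when the timeout is hit, just as the wait above did):

    kubectl --namespace kube-system wait --for=condition=Ready pod/metrics-server-f79f97bbb-bp4mx --timeout=4m
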
	I0120 14:04:11.178719 1969949 kubeadm.go:597] duration metric: took 4m22.42355041s to restartPrimaryControlPlane
	W0120 14:04:11.178845 1969949 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:04:11.178885 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:04:39.032716 1969949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.853801532s)
	I0120 14:04:39.032805 1969949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:04:39.056153 1969949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:04:39.077937 1969949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:04:39.097957 1969949 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:04:39.097986 1969949 kubeadm.go:157] found existing configuration files:
	
	I0120 14:04:39.098074 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:04:39.127178 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:04:39.127249 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:04:39.140640 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:04:39.152447 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:04:39.152516 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:04:39.174543 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:04:39.185436 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:04:39.185521 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:04:39.196720 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:04:39.207028 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:04:39.207105 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:04:39.217474 1969949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:04:39.273124 1969949 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:04:39.273208 1969949 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:04:39.402646 1969949 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:04:39.402821 1969949 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:04:39.402964 1969949 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:04:39.411696 1969949 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:04:39.413689 1969949 out.go:235]   - Generating certificates and keys ...
	I0120 14:04:39.413807 1969949 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:04:39.413895 1969949 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:04:39.414021 1969949 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:04:39.414131 1969949 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:04:39.414240 1969949 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:04:39.414333 1969949 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:04:39.414455 1969949 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:04:39.414538 1969949 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:04:39.414693 1969949 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:04:39.414814 1969949 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:04:39.414881 1969949 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:04:39.414976 1969949 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:04:39.516867 1969949 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:04:39.700148 1969949 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:04:39.838568 1969949 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:04:40.020807 1969949 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:04:40.083569 1969949 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:04:40.083953 1969949 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:04:40.086599 1969949 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:04:40.088383 1969949 out.go:235]   - Booting up control plane ...
	I0120 14:04:40.088515 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:04:40.090041 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:04:40.092450 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:04:40.114859 1969949 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:04:40.124692 1969949 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:04:40.124773 1969949 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:04:40.281534 1969949 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:04:40.281697 1969949 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:04:41.283107 1969949 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001641988s
	I0120 14:04:41.283223 1969949 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:04:46.784985 1969949 kubeadm.go:310] [api-check] The API server is healthy after 5.501686403s
	I0120 14:04:46.800497 1969949 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:04:46.826466 1969949 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:04:46.872907 1969949 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:04:46.873201 1969949 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-648067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:04:46.893113 1969949 kubeadm.go:310] [bootstrap-token] Using token: hll471.vkmzt8kk1d060cyb
	I0120 14:04:46.894672 1969949 out.go:235]   - Configuring RBAC rules ...
	I0120 14:04:46.894865 1969949 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:04:46.901221 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:04:46.911875 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:04:46.916856 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:04:46.922245 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:04:46.929769 1969949 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:04:47.194825 1969949 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:04:47.629977 1969949 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:04:48.194241 1969949 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:04:48.195072 1969949 kubeadm.go:310] 
	I0120 14:04:48.195176 1969949 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:04:48.195193 1969949 kubeadm.go:310] 
	I0120 14:04:48.195309 1969949 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:04:48.195319 1969949 kubeadm.go:310] 
	I0120 14:04:48.195353 1969949 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:04:48.195444 1969949 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:04:48.195583 1969949 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:04:48.195610 1969949 kubeadm.go:310] 
	I0120 14:04:48.195693 1969949 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:04:48.195705 1969949 kubeadm.go:310] 
	I0120 14:04:48.195767 1969949 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:04:48.195776 1969949 kubeadm.go:310] 
	I0120 14:04:48.195891 1969949 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:04:48.196003 1969949 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:04:48.196119 1969949 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:04:48.196143 1969949 kubeadm.go:310] 
	I0120 14:04:48.196264 1969949 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:04:48.196353 1969949 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:04:48.196374 1969949 kubeadm.go:310] 
	I0120 14:04:48.196486 1969949 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hll471.vkmzt8kk1d060cyb \
	I0120 14:04:48.196623 1969949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:04:48.196658 1969949 kubeadm.go:310] 	--control-plane 
	I0120 14:04:48.196668 1969949 kubeadm.go:310] 
	I0120 14:04:48.196788 1969949 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:04:48.196797 1969949 kubeadm.go:310] 
	I0120 14:04:48.196887 1969949 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hll471.vkmzt8kk1d060cyb \
	I0120 14:04:48.196999 1969949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:04:48.198034 1969949 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:04:48.198074 1969949 cni.go:84] Creating CNI manager for ""
	I0120 14:04:48.198087 1969949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:04:48.199935 1969949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:04:48.201356 1969949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:04:48.213317 1969949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:04:48.232194 1969949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:04:48.232407 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-648067 minikube.k8s.io/updated_at=2025_01_20T14_04_48_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=no-preload-648067 minikube.k8s.io/primary=true
	I0120 14:04:48.232407 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:48.270777 1969949 ops.go:34] apiserver oom_adj: -16
	I0120 14:04:48.458517 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:48.959588 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:49.459308 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:49.958914 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:50.459078 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:50.958680 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:51.459194 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:51.958693 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:52.459624 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:52.569627 1969949 kubeadm.go:1113] duration metric: took 4.337296975s to wait for elevateKubeSystemPrivileges
	I0120 14:04:52.569667 1969949 kubeadm.go:394] duration metric: took 5m3.880867579s to StartCluster
	I0120 14:04:52.569696 1969949 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:04:52.569799 1969949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:04:52.571249 1969949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:04:52.571569 1969949 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:04:52.571705 1969949 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:04:52.571794 1969949 addons.go:69] Setting storage-provisioner=true in profile "no-preload-648067"
	I0120 14:04:52.571819 1969949 addons.go:238] Setting addon storage-provisioner=true in "no-preload-648067"
	I0120 14:04:52.571819 1969949 addons.go:69] Setting default-storageclass=true in profile "no-preload-648067"
	W0120 14:04:52.571832 1969949 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:04:52.571833 1969949 addons.go:69] Setting metrics-server=true in profile "no-preload-648067"
	I0120 14:04:52.571850 1969949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-648067"
	I0120 14:04:52.571858 1969949 addons.go:238] Setting addon metrics-server=true in "no-preload-648067"
	W0120 14:04:52.571867 1969949 addons.go:247] addon metrics-server should already be in state true
	I0120 14:04:52.571861 1969949 addons.go:69] Setting dashboard=true in profile "no-preload-648067"
	I0120 14:04:52.571895 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571904 1969949 addons.go:238] Setting addon dashboard=true in "no-preload-648067"
	W0120 14:04:52.571919 1969949 addons.go:247] addon dashboard should already be in state true
	I0120 14:04:52.571873 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571957 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571816 1969949 config.go:182] Loaded profile config "no-preload-648067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:04:52.572249 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572310 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572388 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572402 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572429 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572437 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572388 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572514 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.573278 1969949 out.go:177] * Verifying Kubernetes components...
	I0120 14:04:52.574697 1969949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:04:52.593445 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35109
	I0120 14:04:52.593972 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I0120 14:04:52.594196 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0120 14:04:52.594251 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594311 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0120 14:04:52.594456 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594699 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594819 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.595051 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595058 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595072 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595075 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595878 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.595883 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.595967 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595978 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595992 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595994 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.596089 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.596460 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.596493 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.596495 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.596537 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.597392 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.597458 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.597937 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.597987 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.601273 1969949 addons.go:238] Setting addon default-storageclass=true in "no-preload-648067"
	W0120 14:04:52.601293 1969949 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:04:52.601328 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.601665 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.601709 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.615800 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0120 14:04:52.616400 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.617008 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.617030 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.617408 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.617522 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0120 14:04:52.617864 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.618536 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.619193 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.619209 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.619284 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0120 14:04:52.619647 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.619726 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.619909 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.620278 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.620296 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.620825 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.620943 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0120 14:04:52.621206 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.622123 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.622176 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.622220 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.623015 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.623665 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.623691 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.624470 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.625095 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.625143 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.625528 1969949 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:04:52.625540 1969949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:04:52.625550 1969949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:04:52.627935 1969949 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:04:52.627964 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:04:52.627983 1969949 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:04:52.628010 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.628135 1969949 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:04:52.628150 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:04:52.628172 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.629358 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:04:52.629377 1969949 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:04:52.629400 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.632446 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633059 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633123 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.633132 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.633166 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633329 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.633372 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.633419 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633507 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.633561 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.633761 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.634098 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.634129 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.634291 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.634635 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.634792 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.634816 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.635030 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.635288 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.635523 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.635673 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.649363 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I0120 14:04:52.649962 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.650624 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.650650 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.651046 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.651360 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.653362 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.653620 1969949 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:04:52.653637 1969949 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:04:52.653657 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.656950 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.657430 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.657459 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.657671 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.658472 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.658685 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.658860 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.827213 1969949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:04:52.892209 1969949 node_ready.go:35] waiting up to 6m0s for node "no-preload-648067" to be "Ready" ...
	I0120 14:04:52.927742 1969949 node_ready.go:49] node "no-preload-648067" has status "Ready":"True"
	I0120 14:04:52.927778 1969949 node_ready.go:38] duration metric: took 35.520382ms for node "no-preload-648067" to be "Ready" ...
	I0120 14:04:52.927792 1969949 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:52.945134 1969949 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace to be "Ready" ...
	I0120 14:04:52.998630 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:04:53.015208 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:04:53.015251 1969949 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:04:53.050964 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:04:53.053498 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:04:53.053531 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:04:53.131884 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:04:53.131915 1969949 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:04:53.156697 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:04:53.156734 1969949 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:04:53.267300 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:04:53.267329 1969949 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:04:53.267739 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:04:53.267765 1969949 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:04:53.452299 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:04:53.456705 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.456735 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.457124 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.457209 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.457135 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:53.457264 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.457356 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.457651 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.457667 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.461528 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:04:53.461555 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:04:53.471471 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.471505 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.471848 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.471864 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.515363 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:04:53.515398 1969949 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:04:53.636963 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:04:53.637001 1969949 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:04:53.840979 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:04:53.841011 1969949 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:04:53.959045 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:04:53.959082 1969949 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:04:54.051582 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:04:54.051618 1969949 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:04:54.170664 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:04:54.682801 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.631779213s)
	I0120 14:04:54.682872 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:54.682887 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:54.683248 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:54.683271 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:54.683286 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:54.683296 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:54.683571 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:54.683595 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:54.683577 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:54.982997 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:55.132956 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.680599793s)
	I0120 14:04:55.133021 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.133038 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.133526 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.133549 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.133560 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.133568 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.133526 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.133807 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.133831 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.133847 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.133867 1969949 addons.go:479] Verifying addon metrics-server=true in "no-preload-648067"
	I0120 14:04:55.971683 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.800920116s)
	I0120 14:04:55.971747 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.971763 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.972123 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.972144 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.972155 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.972163 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.972460 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.973844 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.973867 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.975729 1969949 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-648067 addons enable metrics-server
	
	I0120 14:04:55.977469 1969949 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:04:55.979014 1969949 addons.go:514] duration metric: took 3.407316682s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:04:57.451990 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.452924 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:01.452480 1969949 pod_ready.go:93] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.452519 1969949 pod_ready.go:82] duration metric: took 8.507352286s for pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.452534 1969949 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.458456 1969949 pod_ready.go:93] pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.458488 1969949 pod_ready.go:82] duration metric: took 5.941966ms for pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.458503 1969949 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.465708 1969949 pod_ready.go:93] pod "etcd-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.465733 1969949 pod_ready.go:82] duration metric: took 7.221959ms for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.465745 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.473764 1969949 pod_ready.go:93] pod "kube-apiserver-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.473796 1969949 pod_ready.go:82] duration metric: took 8.041648ms for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.473815 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.480463 1969949 pod_ready.go:93] pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.480494 1969949 pod_ready.go:82] duration metric: took 6.670074ms for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.480508 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kr6tq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.849787 1969949 pod_ready.go:93] pod "kube-proxy-kr6tq" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.849820 1969949 pod_ready.go:82] duration metric: took 369.302403ms for pod "kube-proxy-kr6tq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.849834 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:02.250242 1969949 pod_ready.go:93] pod "kube-scheduler-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:02.250279 1969949 pod_ready.go:82] duration metric: took 400.436958ms for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:02.250289 1969949 pod_ready.go:39] duration metric: took 9.322472589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:02.250305 1969949 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:02.250373 1969949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:02.307690 1969949 api_server.go:72] duration metric: took 9.736077102s to wait for apiserver process to appear ...
	I0120 14:05:02.307725 1969949 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:02.307751 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 14:05:02.312837 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0120 14:05:02.314012 1969949 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:02.314038 1969949 api_server.go:131] duration metric: took 6.305469ms to wait for apiserver health ...
	I0120 14:05:02.314047 1969949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:02.454048 1969949 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:02.454092 1969949 system_pods.go:61] "coredns-668d6bf9bc-2fbd7" [d2cf52fe-b375-47cd-a4bf-d7ef8e07a4b7] Running
	I0120 14:05:02.454099 1969949 system_pods.go:61] "coredns-668d6bf9bc-86xhz" [4af72226-8186-40e7-a923-01381cc52731] Running
	I0120 14:05:02.454104 1969949 system_pods.go:61] "etcd-no-preload-648067" [87debb8b-80bc-41cc-91f3-7b905ab8177c] Running
	I0120 14:05:02.454109 1969949 system_pods.go:61] "kube-apiserver-no-preload-648067" [6b1f5f1b-67ae-4ab2-a186-1c5224fcbc4e] Running
	I0120 14:05:02.454114 1969949 system_pods.go:61] "kube-controller-manager-no-preload-648067" [1bf90869-71a8-4459-a1b8-b59f78af8a8b] Running
	I0120 14:05:02.454119 1969949 system_pods.go:61] "kube-proxy-kr6tq" [462ab3d1-c225-4319-bac8-926a1e43a14d] Running
	I0120 14:05:02.454125 1969949 system_pods.go:61] "kube-scheduler-no-preload-648067" [38edfe65-9c58-4a24-b108-c22846010b97] Running
	I0120 14:05:02.454136 1969949 system_pods.go:61] "metrics-server-f79f97bbb-9kb5f" [fb8dd9df-cd37-4779-af22-4abd91dbc421] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:02.454144 1969949 system_pods.go:61] "storage-provisioner" [12bde765-1258-4689-b448-64208dd30638] Running
	I0120 14:05:02.454158 1969949 system_pods.go:74] duration metric: took 140.103109ms to wait for pod list to return data ...
	I0120 14:05:02.454172 1969949 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:02.650007 1969949 default_sa.go:45] found service account: "default"
	I0120 14:05:02.650050 1969949 default_sa.go:55] duration metric: took 195.869128ms for default service account to be created ...
	I0120 14:05:02.650064 1969949 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:02.853144 1969949 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-648067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-648067 -n no-preload-648067
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-648067 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-648067 logs -n 25: (1.626992814s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-038404                              | cert-expiration-038404       | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	| start   | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-648067             | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-955986 | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	|         | disable-driver-mounts-955986                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:59 UTC |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-647109            | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 14:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-648067                  | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 13:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-191446        | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-727256  | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 13:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 14:01 UTC |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-647109                 | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC | 20 Jan 25 14:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-191446             | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-727256       | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:25 UTC | 20 Jan 25 14:25 UTC |
	| start   | -p newest-cni-345509 --memory=2200 --alsologtostderr   | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:25 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 14:25:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 14:25:24.156741 1977439 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:25:24.156891 1977439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:25:24.156904 1977439 out.go:358] Setting ErrFile to fd 2...
	I0120 14:25:24.156911 1977439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:25:24.157107 1977439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 14:25:24.157750 1977439 out.go:352] Setting JSON to false
	I0120 14:25:24.158954 1977439 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22070,"bootTime":1737361054,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:25:24.159070 1977439 start.go:139] virtualization: kvm guest
	I0120 14:25:24.161394 1977439 out.go:177] * [newest-cni-345509] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:25:24.162971 1977439 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:25:24.163015 1977439 notify.go:220] Checking for updates...
	I0120 14:25:24.165212 1977439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:25:24.166324 1977439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:25:24.167545 1977439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:25:24.168714 1977439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:25:24.170040 1977439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:25:24.171705 1977439 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:25:24.171829 1977439 config.go:182] Loaded profile config "embed-certs-647109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:25:24.171937 1977439 config.go:182] Loaded profile config "no-preload-648067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:25:24.172090 1977439 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:25:24.213166 1977439 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 14:25:24.214705 1977439 start.go:297] selected driver: kvm2
	I0120 14:25:24.214732 1977439 start.go:901] validating driver "kvm2" against <nil>
	I0120 14:25:24.214750 1977439 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:25:24.215877 1977439 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:25:24.215975 1977439 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:25:24.233593 1977439 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:25:24.233651 1977439 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0120 14:25:24.233716 1977439 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0120 14:25:24.233992 1977439 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0120 14:25:24.234051 1977439 cni.go:84] Creating CNI manager for ""
	I0120 14:25:24.234133 1977439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:25:24.234146 1977439 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 14:25:24.234200 1977439 start.go:340] cluster config:
	{Name:newest-cni-345509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-345509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath
: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:25:24.234350 1977439 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:25:24.236116 1977439 out.go:177] * Starting "newest-cni-345509" primary control-plane node in "newest-cni-345509" cluster
	I0120 14:25:24.237327 1977439 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:25:24.237396 1977439 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 14:25:24.237407 1977439 cache.go:56] Caching tarball of preloaded images
	I0120 14:25:24.237525 1977439 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 14:25:24.237540 1977439 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 14:25:24.237625 1977439 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/newest-cni-345509/config.json ...
	I0120 14:25:24.237643 1977439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/newest-cni-345509/config.json: {Name:mk736ece2f68a73874054eeb8aa501f59b31e6d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:25:24.237815 1977439 start.go:360] acquireMachinesLock for newest-cni-345509: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:25:24.237856 1977439 start.go:364] duration metric: took 19.364µs to acquireMachinesLock for "newest-cni-345509"
	I0120 14:25:24.237896 1977439 start.go:93] Provisioning new machine with config: &{Name:newest-cni-345509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-345509
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:25:24.237992 1977439 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 14:25:24.239587 1977439 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0120 14:25:24.239735 1977439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:25:24.239799 1977439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:25:24.256778 1977439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0120 14:25:24.257290 1977439 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:25:24.257864 1977439 main.go:141] libmachine: Using API Version  1
	I0120 14:25:24.257885 1977439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:25:24.258332 1977439 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:25:24.258560 1977439 main.go:141] libmachine: (newest-cni-345509) Calling .GetMachineName
	I0120 14:25:24.258768 1977439 main.go:141] libmachine: (newest-cni-345509) Calling .DriverName
	I0120 14:25:24.258917 1977439 start.go:159] libmachine.API.Create for "newest-cni-345509" (driver="kvm2")
	I0120 14:25:24.258950 1977439 client.go:168] LocalClient.Create starting
	I0120 14:25:24.259009 1977439 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem
	I0120 14:25:24.259057 1977439 main.go:141] libmachine: Decoding PEM data...
	I0120 14:25:24.259080 1977439 main.go:141] libmachine: Parsing certificate...
	I0120 14:25:24.259163 1977439 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem
	I0120 14:25:24.259193 1977439 main.go:141] libmachine: Decoding PEM data...
	I0120 14:25:24.259210 1977439 main.go:141] libmachine: Parsing certificate...
	I0120 14:25:24.259237 1977439 main.go:141] libmachine: Running pre-create checks...
	I0120 14:25:24.259247 1977439 main.go:141] libmachine: (newest-cni-345509) Calling .PreCreateCheck
	I0120 14:25:24.259611 1977439 main.go:141] libmachine: (newest-cni-345509) Calling .GetConfigRaw
	I0120 14:25:24.260079 1977439 main.go:141] libmachine: Creating machine...
	I0120 14:25:24.260094 1977439 main.go:141] libmachine: (newest-cni-345509) Calling .Create
	I0120 14:25:24.260268 1977439 main.go:141] libmachine: (newest-cni-345509) creating KVM machine...
	I0120 14:25:24.260286 1977439 main.go:141] libmachine: (newest-cni-345509) creating network...
	I0120 14:25:24.261736 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | found existing default KVM network
	I0120 14:25:24.263850 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:24.263635 1977462 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ef:b6:0e} reservation:<nil>}
	I0120 14:25:24.264698 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:24.264608 1977462 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:30:00:ce} reservation:<nil>}
	I0120 14:25:24.265862 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:24.265776 1977462 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ef2b0}
	I0120 14:25:24.265891 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | created network xml: 
	I0120 14:25:24.265898 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | <network>
	I0120 14:25:24.265907 1977439 main.go:141] libmachine: (newest-cni-345509) DBG |   <name>mk-newest-cni-345509</name>
	I0120 14:25:24.265915 1977439 main.go:141] libmachine: (newest-cni-345509) DBG |   <dns enable='no'/>
	I0120 14:25:24.265923 1977439 main.go:141] libmachine: (newest-cni-345509) DBG |   
	I0120 14:25:24.265929 1977439 main.go:141] libmachine: (newest-cni-345509) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0120 14:25:24.265953 1977439 main.go:141] libmachine: (newest-cni-345509) DBG |     <dhcp>
	I0120 14:25:24.265962 1977439 main.go:141] libmachine: (newest-cni-345509) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0120 14:25:24.265966 1977439 main.go:141] libmachine: (newest-cni-345509) DBG |     </dhcp>
	I0120 14:25:24.265972 1977439 main.go:141] libmachine: (newest-cni-345509) DBG |   </ip>
	I0120 14:25:24.265978 1977439 main.go:141] libmachine: (newest-cni-345509) DBG |   
	I0120 14:25:24.265996 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | </network>
	I0120 14:25:24.266008 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | 
	I0120 14:25:24.271484 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | trying to create private KVM network mk-newest-cni-345509 192.168.61.0/24...
	I0120 14:25:24.349977 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | private KVM network mk-newest-cni-345509 192.168.61.0/24 created
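
The network XML dumped above is the isolated libvirt network the kvm2 driver creates for the cluster: a named network with DNS disabled and a /24 DHCP range. A minimal Go sketch (not minikube's own code; the struct and field names are made up for illustration) that emits an equivalent definition with encoding/xml:

// network_xml.go: a minimal sketch (not minikube's code) of how a libvirt
// network definition like the one logged above can be generated with
// encoding/xml. Struct names Network/IP/DHCPRange are illustrative only.
package main

import (
	"encoding/xml"
	"fmt"
)

type dhcpRange struct {
	Start string `xml:"start,attr"`
	End   string `xml:"end,attr"`
}

type dhcp struct {
	Range dhcpRange `xml:"range"`
}

type ip struct {
	Address string `xml:"address,attr"`
	Netmask string `xml:"netmask,attr"`
	DHCP    dhcp   `xml:"dhcp"`
}

type dns struct {
	Enable string `xml:"enable,attr"`
}

type network struct {
	XMLName xml.Name `xml:"network"`
	Name    string   `xml:"name"`
	DNS     dns      `xml:"dns"`
	IP      ip       `xml:"ip"`
}

func main() {
	n := network{
		Name: "mk-newest-cni-345509",
		DNS:  dns{Enable: "no"},
		IP: ip{
			Address: "192.168.61.1",
			Netmask: "255.255.255.0",
			DHCP:    dhcp{Range: dhcpRange{Start: "192.168.61.2", End: "192.168.61.253"}},
		},
	}
	out, err := xml.MarshalIndent(n, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Running it prints XML equivalent to the <network> block logged above (modulo self-closing tags), which is then handed to libvirt as the private KVM network.
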
	I0120 14:25:24.350015 1977439 main.go:141] libmachine: (newest-cni-345509) setting up store path in /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/newest-cni-345509 ...
	I0120 14:25:24.350027 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:24.349968 1977462 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:25:24.350044 1977439 main.go:141] libmachine: (newest-cni-345509) building disk image from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 14:25:24.350132 1977439 main.go:141] libmachine: (newest-cni-345509) Downloading /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 14:25:24.702213 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:24.702063 1977462 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/newest-cni-345509/id_rsa...
	I0120 14:25:24.888224 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:24.888055 1977462 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/newest-cni-345509/newest-cni-345509.rawdisk...
	I0120 14:25:24.888258 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | Writing magic tar header
	I0120 14:25:24.888283 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | Writing SSH key tar header
	I0120 14:25:24.888297 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:24.888205 1977462 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/newest-cni-345509 ...
	I0120 14:25:24.888348 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/newest-cni-345509
	I0120 14:25:24.888383 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines
	I0120 14:25:24.888411 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:25:24.888474 1977439 main.go:141] libmachine: (newest-cni-345509) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/newest-cni-345509 (perms=drwx------)
	I0120 14:25:24.888502 1977439 main.go:141] libmachine: (newest-cni-345509) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines (perms=drwxr-xr-x)
	I0120 14:25:24.888519 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423
	I0120 14:25:24.888534 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 14:25:24.888546 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | checking permissions on dir: /home/jenkins
	I0120 14:25:24.888558 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | checking permissions on dir: /home
	I0120 14:25:24.888569 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | skipping /home - not owner
	I0120 14:25:24.888580 1977439 main.go:141] libmachine: (newest-cni-345509) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube (perms=drwxr-xr-x)
	I0120 14:25:24.888597 1977439 main.go:141] libmachine: (newest-cni-345509) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423 (perms=drwxrwxr-x)
	I0120 14:25:24.888607 1977439 main.go:141] libmachine: (newest-cni-345509) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 14:25:24.888617 1977439 main.go:141] libmachine: (newest-cni-345509) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
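
The permission pass above walks from the new machine directory up toward /home, chmod-ing every directory the CI user owns so the path stays traversable, and stops at the first directory it does not own. A rough stdlib-only sketch of that pattern (hypothetical code, not the minikube implementation; the path is taken from the log):

// fixperms.go: a rough sketch of the "checking permissions / setting
// executable bit" pass seen above. Walk from the machine directory up to the
// first directory we do not own and add the owner-execute bit along the way.
package main

import (
	"fmt"
	"os"
	"os/user"
	"path/filepath"
	"strconv"
	"syscall"
)

func main() {
	start := "/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/newest-cni-345509"
	me, err := user.Current()
	if err != nil {
		panic(err)
	}
	uid, _ := strconv.Atoi(me.Uid)

	for dir := start; dir != "/"; dir = filepath.Dir(dir) {
		fmt.Println("checking permissions on dir:", dir)
		info, err := os.Stat(dir)
		if err != nil {
			panic(err)
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != uid {
			fmt.Println("skipping", dir, "- not owner")
			break // stop once we hit a directory we do not own (e.g. /home)
		}
		// add the owner-execute (search) bit so the path stays traversable
		perm := info.Mode().Perm() | 0o100
		if err := os.Chmod(dir, perm); err != nil {
			panic(err)
		}
		fmt.Printf("setting executable bit set on %s (perms=%v)\n", dir, perm)
	}
}
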
	I0120 14:25:24.888627 1977439 main.go:141] libmachine: (newest-cni-345509) creating domain...
	I0120 14:25:24.889885 1977439 main.go:141] libmachine: (newest-cni-345509) define libvirt domain using xml: 
	I0120 14:25:24.889908 1977439 main.go:141] libmachine: (newest-cni-345509) <domain type='kvm'>
	I0120 14:25:24.889916 1977439 main.go:141] libmachine: (newest-cni-345509)   <name>newest-cni-345509</name>
	I0120 14:25:24.889936 1977439 main.go:141] libmachine: (newest-cni-345509)   <memory unit='MiB'>2200</memory>
	I0120 14:25:24.889945 1977439 main.go:141] libmachine: (newest-cni-345509)   <vcpu>2</vcpu>
	I0120 14:25:24.889949 1977439 main.go:141] libmachine: (newest-cni-345509)   <features>
	I0120 14:25:24.889962 1977439 main.go:141] libmachine: (newest-cni-345509)     <acpi/>
	I0120 14:25:24.889977 1977439 main.go:141] libmachine: (newest-cni-345509)     <apic/>
	I0120 14:25:24.889989 1977439 main.go:141] libmachine: (newest-cni-345509)     <pae/>
	I0120 14:25:24.889997 1977439 main.go:141] libmachine: (newest-cni-345509)     
	I0120 14:25:24.890026 1977439 main.go:141] libmachine: (newest-cni-345509)   </features>
	I0120 14:25:24.890051 1977439 main.go:141] libmachine: (newest-cni-345509)   <cpu mode='host-passthrough'>
	I0120 14:25:24.890062 1977439 main.go:141] libmachine: (newest-cni-345509)   
	I0120 14:25:24.890068 1977439 main.go:141] libmachine: (newest-cni-345509)   </cpu>
	I0120 14:25:24.890085 1977439 main.go:141] libmachine: (newest-cni-345509)   <os>
	I0120 14:25:24.890095 1977439 main.go:141] libmachine: (newest-cni-345509)     <type>hvm</type>
	I0120 14:25:24.890103 1977439 main.go:141] libmachine: (newest-cni-345509)     <boot dev='cdrom'/>
	I0120 14:25:24.890113 1977439 main.go:141] libmachine: (newest-cni-345509)     <boot dev='hd'/>
	I0120 14:25:24.890122 1977439 main.go:141] libmachine: (newest-cni-345509)     <bootmenu enable='no'/>
	I0120 14:25:24.890133 1977439 main.go:141] libmachine: (newest-cni-345509)   </os>
	I0120 14:25:24.890155 1977439 main.go:141] libmachine: (newest-cni-345509)   <devices>
	I0120 14:25:24.890178 1977439 main.go:141] libmachine: (newest-cni-345509)     <disk type='file' device='cdrom'>
	I0120 14:25:24.890193 1977439 main.go:141] libmachine: (newest-cni-345509)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/newest-cni-345509/boot2docker.iso'/>
	I0120 14:25:24.890204 1977439 main.go:141] libmachine: (newest-cni-345509)       <target dev='hdc' bus='scsi'/>
	I0120 14:25:24.890223 1977439 main.go:141] libmachine: (newest-cni-345509)       <readonly/>
	I0120 14:25:24.890232 1977439 main.go:141] libmachine: (newest-cni-345509)     </disk>
	I0120 14:25:24.890242 1977439 main.go:141] libmachine: (newest-cni-345509)     <disk type='file' device='disk'>
	I0120 14:25:24.890262 1977439 main.go:141] libmachine: (newest-cni-345509)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 14:25:24.890279 1977439 main.go:141] libmachine: (newest-cni-345509)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/newest-cni-345509/newest-cni-345509.rawdisk'/>
	I0120 14:25:24.890290 1977439 main.go:141] libmachine: (newest-cni-345509)       <target dev='hda' bus='virtio'/>
	I0120 14:25:24.890297 1977439 main.go:141] libmachine: (newest-cni-345509)     </disk>
	I0120 14:25:24.890308 1977439 main.go:141] libmachine: (newest-cni-345509)     <interface type='network'>
	I0120 14:25:24.890318 1977439 main.go:141] libmachine: (newest-cni-345509)       <source network='mk-newest-cni-345509'/>
	I0120 14:25:24.890341 1977439 main.go:141] libmachine: (newest-cni-345509)       <model type='virtio'/>
	I0120 14:25:24.890349 1977439 main.go:141] libmachine: (newest-cni-345509)     </interface>
	I0120 14:25:24.890355 1977439 main.go:141] libmachine: (newest-cni-345509)     <interface type='network'>
	I0120 14:25:24.890373 1977439 main.go:141] libmachine: (newest-cni-345509)       <source network='default'/>
	I0120 14:25:24.890385 1977439 main.go:141] libmachine: (newest-cni-345509)       <model type='virtio'/>
	I0120 14:25:24.890394 1977439 main.go:141] libmachine: (newest-cni-345509)     </interface>
	I0120 14:25:24.890404 1977439 main.go:141] libmachine: (newest-cni-345509)     <serial type='pty'>
	I0120 14:25:24.890412 1977439 main.go:141] libmachine: (newest-cni-345509)       <target port='0'/>
	I0120 14:25:24.890420 1977439 main.go:141] libmachine: (newest-cni-345509)     </serial>
	I0120 14:25:24.890450 1977439 main.go:141] libmachine: (newest-cni-345509)     <console type='pty'>
	I0120 14:25:24.890467 1977439 main.go:141] libmachine: (newest-cni-345509)       <target type='serial' port='0'/>
	I0120 14:25:24.890477 1977439 main.go:141] libmachine: (newest-cni-345509)     </console>
	I0120 14:25:24.890484 1977439 main.go:141] libmachine: (newest-cni-345509)     <rng model='virtio'>
	I0120 14:25:24.890496 1977439 main.go:141] libmachine: (newest-cni-345509)       <backend model='random'>/dev/random</backend>
	I0120 14:25:24.890503 1977439 main.go:141] libmachine: (newest-cni-345509)     </rng>
	I0120 14:25:24.890511 1977439 main.go:141] libmachine: (newest-cni-345509)     
	I0120 14:25:24.890524 1977439 main.go:141] libmachine: (newest-cni-345509)     
	I0120 14:25:24.890537 1977439 main.go:141] libmachine: (newest-cni-345509)   </devices>
	I0120 14:25:24.890547 1977439 main.go:141] libmachine: (newest-cni-345509) </domain>
	I0120 14:25:24.890558 1977439 main.go:141] libmachine: (newest-cni-345509) 
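
The domain XML above is what the kvm2 driver hands to libvirt to define and boot the VM. Assuming that XML were saved to newest-cni-345509.xml, a rough hand-driven equivalent using the virsh CLI from Go might look like the following sketch (illustrative only; minikube talks to libvirt through its driver plugin rather than shelling out):

// create_domain.go: illustrative only. Drive libvirt by shelling out to
// virsh, roughly mirroring the define/start/wait-for-IP sequence in the log.
// The file and domain names are taken from the log; error handling is minimal.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const domain = "newest-cni-345509"

	// register the domain definition generated earlier
	if out, err := run("define", domain+".xml"); err != nil {
		panic(fmt.Sprintf("virsh define failed: %v\n%s", err, out))
	}

	// boot it
	if out, err := run("start", domain); err != nil {
		panic(fmt.Sprintf("virsh start failed: %v\n%s", err, out))
	}

	// ask libvirt which addresses the guest interfaces picked up via DHCP;
	// an empty answer here corresponds to the "waiting for IP" retries below
	out, err := run("domifaddr", domain)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}

Watching `virsh net-dhcp-leases mk-newest-cni-345509` is another way to see the lease that the "waiting for IP" retries below are polling for.
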
	I0120 14:25:24.895288 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:28:99:3e in network default
	I0120 14:25:24.895847 1977439 main.go:141] libmachine: (newest-cni-345509) starting domain...
	I0120 14:25:24.895869 1977439 main.go:141] libmachine: (newest-cni-345509) ensuring networks are active...
	I0120 14:25:24.895876 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:24.896701 1977439 main.go:141] libmachine: (newest-cni-345509) Ensuring network default is active
	I0120 14:25:24.897006 1977439 main.go:141] libmachine: (newest-cni-345509) Ensuring network mk-newest-cni-345509 is active
	I0120 14:25:24.897486 1977439 main.go:141] libmachine: (newest-cni-345509) getting domain XML...
	I0120 14:25:24.898208 1977439 main.go:141] libmachine: (newest-cni-345509) creating domain...
	I0120 14:25:26.234126 1977439 main.go:141] libmachine: (newest-cni-345509) waiting for IP...
	I0120 14:25:26.235059 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:26.235556 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:26.235645 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:26.235571 1977462 retry.go:31] will retry after 245.264471ms: waiting for domain to come up
	I0120 14:25:26.482106 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:26.482670 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:26.482696 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:26.482636 1977462 retry.go:31] will retry after 311.757379ms: waiting for domain to come up
	I0120 14:25:26.796407 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:26.796916 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:26.796956 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:26.796867 1977462 retry.go:31] will retry after 334.660743ms: waiting for domain to come up
	I0120 14:25:27.133462 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:27.134124 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:27.134153 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:27.134097 1977462 retry.go:31] will retry after 461.633486ms: waiting for domain to come up
	I0120 14:25:27.597520 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:27.598020 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:27.598141 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:27.598024 1977462 retry.go:31] will retry after 593.824564ms: waiting for domain to come up
	I0120 14:25:28.194124 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:28.194731 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:28.194762 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:28.194685 1977462 retry.go:31] will retry after 619.785482ms: waiting for domain to come up
	I0120 14:25:28.816701 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:28.817233 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:28.817268 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:28.817203 1977462 retry.go:31] will retry after 859.970467ms: waiting for domain to come up
	I0120 14:25:29.678670 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:29.679229 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:29.679280 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:29.679201 1977462 retry.go:31] will retry after 1.44091067s: waiting for domain to come up
	I0120 14:25:31.121757 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:31.122362 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:31.122445 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:31.122326 1977462 retry.go:31] will retry after 1.34376779s: waiting for domain to come up
	I0120 14:25:32.467726 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:32.468308 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:32.468340 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:32.468274 1977462 retry.go:31] will retry after 1.789430075s: waiting for domain to come up
	I0120 14:25:34.259927 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:34.260468 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:34.260498 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:34.260437 1977462 retry.go:31] will retry after 2.589716121s: waiting for domain to come up
	I0120 14:25:36.851956 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:36.852544 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:36.852594 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:36.852510 1977462 retry.go:31] will retry after 2.811129064s: waiting for domain to come up
	I0120 14:25:39.664905 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:39.665458 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:39.665537 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:39.665430 1977462 retry.go:31] will retry after 4.297799917s: waiting for domain to come up
	I0120 14:25:43.967788 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | domain newest-cni-345509 has defined MAC address 52:54:00:90:e0:7e in network mk-newest-cni-345509
	I0120 14:25:43.968369 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | unable to find current IP address of domain newest-cni-345509 in network mk-newest-cni-345509
	I0120 14:25:43.968394 1977439 main.go:141] libmachine: (newest-cni-345509) DBG | I0120 14:25:43.968350 1977462 retry.go:31] will retry after 4.909102944s: waiting for domain to come up
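
The block of retry.go lines above is a poll-with-backoff loop: each attempt to read the domain's IP fails until DHCP hands out a lease, and the delay between attempts grows with some jitter. A self-contained sketch of that pattern (hypothetical helper names; lookupIP is a stand-in, not a real minikube function):

// wait_for_ip.go: a minimal sketch of the retry pattern in the log above,
// polling until the domain reports an IP and backing off between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP stands in for a real query (e.g. parsing DHCP leases via libvirt).
func lookupIP(domain string) (string, error) {
	return "", errNoIP // pretend the lease has not appeared yet
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP(domain)
		if err == nil {
			return ip, nil
		}
		// add some jitter and grow the delay, capped so we keep polling
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", domain)
}

func main() {
	if ip, err := waitForIP("newest-cni-345509", 10*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("got IP:", ip)
	}
}

In the log the delays grow from roughly 250ms to about 5s before the domain finally reports an address.
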
	
	
	==> CRI-O <==
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.592994333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383148592967934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a29687b8-ef8e-4c2a-9cb7-63e657b69393 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.593662649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4532700-cc65-4789-9b23-781a74c11dbd name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.593758537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4532700-cc65-4789-9b23-781a74c11dbd name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.594023829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6f657375efdb53cffff0adc6df9efae3906780516424446de56b1ec4d3364ad,PodSandboxId:50eac0bc57e05b2b33fbac3552aadcb92e2ea1dcee61a291eff29bfb66c4e69d,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737382870593202994,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-zjrbt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c9263a9e-0a61-4ca8-8149-6a2b80230e0b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a60289c0bf5ac1ec132e0fb0f6b6caaa68a88627aa832ff87079a786d397bb5,PodSandboxId:d0e92ff4b322c51d0d65f63c283e9b3d0d3d1f955e7d1631fc3b75a2a7cb8666,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737381904010486019,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-hvsl5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 46e04138-9d1c-46b8-a731-e36e24f32195,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba6bb456d752d3bdbab368738dffa57ac0c8aac392f4b605713c8d97e55f60e,PodSandboxId:fb49e2974fb8933547bf866e593dabbfbd892305da7b01a3448c842de5d44ffd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737381895224097805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12bde765-1258-4689-b448-64208dd30638,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad5bfa1e79b4f8af2beae628923bba45357591eab161f0561b8db444a91eade,PodSandboxId:efd0f5725ccd22f1f96da93bf4d61d1435703d719ad1f9ceeb672fb3dd79e0b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381894134545158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2fbd7,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: d2cf52fe-b375-47cd-a4bf-d7ef8e07a4b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600724457b6b9a7e05ae381086c9ac8d8459886b1fe89052371e8eda011461e6,PodSandboxId:81a93c91b321371c8d740b227337cee5d57a4e1e06a2be29d40f24e567da94c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381894098458339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-86xhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af72226-8186-40e7-a923-01381cc52731,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f973863199f28514df15d4602430f3d53d8b2cddf8d61926ee05d4008a80d9d,PodSandboxId:a56caf8f857e26f0810834fc0532d61aad8f39c4f1aa3f5a32d7ee52923be282,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737381892943209461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr6tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462ab3d1-c225-4319-bac8-926a1e43a14d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a9f6ca99f44f43093f142f996c2a59b01a07a70d53c57c746c57c9a7e687d77,PodSandboxId:670f516ac6f21739d1bbd5c0ebbfb77fde2c72b8d3975867b84ed1e916af478a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe412
7810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737381881816338441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e5630aa0948b0c64e006de454d942e2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77a00016c81ae21c4c6a54f41f7a03e6daea7b40a27fcc18129d32b1937911b,PodSandboxId:9ab0c9c5f04ceecef2fbaa1a16fd54d0099f170cefbd26bd70187a876ee88646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a
39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737381881821578787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c0305cd27e9c08360ae1277590706,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033ac1e428c1fb03123cf80df48baa1553707be2cadb106dd717fc64c02b697c,PodSandboxId:791458b59a01edbe7901557e5adbdaf546737c3b80b471ad0b2b7941911b30ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0a
f08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737381881772986427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3978b83de5cac58d7eac30b0020136a4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc003a7dfafb6efece53ff693b58f94e6e484ebf7ffc5acece80f3ed389368a,PodSandboxId:ccd90892ca820e05c898ef28caf7988b071d12aefe04aeba0303581667d9ac7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535
f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737381881707981417,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32923d074762278ed6be9cdc1167454,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4be9fd430f2b319fcacdeadb560de6d13ebd50f2c94806508ca54deacdd809,PodSandboxId:53664b1fd450c2fdcc421d9bcf3ccaedca7e0badf040519646909dd0b3c0c9bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381590946224876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c0305cd27e9c08360ae1277590706,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4532700-cc65-4789-9b23-781a74c11dbd name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.638080035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33016013-8514-460f-9c9c-9026ad5c4773 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.638176521Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33016013-8514-460f-9c9c-9026ad5c4773 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.639230589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa4deb6e-b447-4c4c-9447-d869306dd988 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.639729001Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383148639702429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa4deb6e-b447-4c4c-9447-d869306dd988 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.640264064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=200d1dc7-e6d1-4932-b868-d768e9c34265 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.640336386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=200d1dc7-e6d1-4932-b868-d768e9c34265 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.640628034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6f657375efdb53cffff0adc6df9efae3906780516424446de56b1ec4d3364ad,PodSandboxId:50eac0bc57e05b2b33fbac3552aadcb92e2ea1dcee61a291eff29bfb66c4e69d,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737382870593202994,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-zjrbt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c9263a9e-0a61-4ca8-8149-6a2b80230e0b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a60289c0bf5ac1ec132e0fb0f6b6caaa68a88627aa832ff87079a786d397bb5,PodSandboxId:d0e92ff4b322c51d0d65f63c283e9b3d0d3d1f955e7d1631fc3b75a2a7cb8666,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737381904010486019,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-hvsl5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 46e04138-9d1c-46b8-a731-e36e24f32195,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba6bb456d752d3bdbab368738dffa57ac0c8aac392f4b605713c8d97e55f60e,PodSandboxId:fb49e2974fb8933547bf866e593dabbfbd892305da7b01a3448c842de5d44ffd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737381895224097805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12bde765-1258-4689-b448-64208dd30638,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad5bfa1e79b4f8af2beae628923bba45357591eab161f0561b8db444a91eade,PodSandboxId:efd0f5725ccd22f1f96da93bf4d61d1435703d719ad1f9ceeb672fb3dd79e0b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381894134545158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2fbd7,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: d2cf52fe-b375-47cd-a4bf-d7ef8e07a4b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600724457b6b9a7e05ae381086c9ac8d8459886b1fe89052371e8eda011461e6,PodSandboxId:81a93c91b321371c8d740b227337cee5d57a4e1e06a2be29d40f24e567da94c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381894098458339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-86xhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af72226-8186-40e7-a923-01381cc52731,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f973863199f28514df15d4602430f3d53d8b2cddf8d61926ee05d4008a80d9d,PodSandboxId:a56caf8f857e26f0810834fc0532d61aad8f39c4f1aa3f5a32d7ee52923be282,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737381892943209461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr6tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462ab3d1-c225-4319-bac8-926a1e43a14d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a9f6ca99f44f43093f142f996c2a59b01a07a70d53c57c746c57c9a7e687d77,PodSandboxId:670f516ac6f21739d1bbd5c0ebbfb77fde2c72b8d3975867b84ed1e916af478a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe412
7810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737381881816338441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e5630aa0948b0c64e006de454d942e2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77a00016c81ae21c4c6a54f41f7a03e6daea7b40a27fcc18129d32b1937911b,PodSandboxId:9ab0c9c5f04ceecef2fbaa1a16fd54d0099f170cefbd26bd70187a876ee88646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a
39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737381881821578787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c0305cd27e9c08360ae1277590706,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033ac1e428c1fb03123cf80df48baa1553707be2cadb106dd717fc64c02b697c,PodSandboxId:791458b59a01edbe7901557e5adbdaf546737c3b80b471ad0b2b7941911b30ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0a
f08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737381881772986427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3978b83de5cac58d7eac30b0020136a4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc003a7dfafb6efece53ff693b58f94e6e484ebf7ffc5acece80f3ed389368a,PodSandboxId:ccd90892ca820e05c898ef28caf7988b071d12aefe04aeba0303581667d9ac7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535
f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737381881707981417,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32923d074762278ed6be9cdc1167454,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4be9fd430f2b319fcacdeadb560de6d13ebd50f2c94806508ca54deacdd809,PodSandboxId:53664b1fd450c2fdcc421d9bcf3ccaedca7e0badf040519646909dd0b3c0c9bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381590946224876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c0305cd27e9c08360ae1277590706,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=200d1dc7-e6d1-4932-b868-d768e9c34265 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.681564645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=304441e6-775c-4663-a580-bc5cfafb4525 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.681665352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=304441e6-775c-4663-a580-bc5cfafb4525 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.682912712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12c05045-b30b-433c-8f61-6ccd2c7d1663 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.683317078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383148683292070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12c05045-b30b-433c-8f61-6ccd2c7d1663 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.683898054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09952d04-e6a6-4b8d-a0b3-633fb5b63425 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.683981420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09952d04-e6a6-4b8d-a0b3-633fb5b63425 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.684214986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6f657375efdb53cffff0adc6df9efae3906780516424446de56b1ec4d3364ad,PodSandboxId:50eac0bc57e05b2b33fbac3552aadcb92e2ea1dcee61a291eff29bfb66c4e69d,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737382870593202994,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-zjrbt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c9263a9e-0a61-4ca8-8149-6a2b80230e0b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a60289c0bf5ac1ec132e0fb0f6b6caaa68a88627aa832ff87079a786d397bb5,PodSandboxId:d0e92ff4b322c51d0d65f63c283e9b3d0d3d1f955e7d1631fc3b75a2a7cb8666,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737381904010486019,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-hvsl5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 46e04138-9d1c-46b8-a731-e36e24f32195,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba6bb456d752d3bdbab368738dffa57ac0c8aac392f4b605713c8d97e55f60e,PodSandboxId:fb49e2974fb8933547bf866e593dabbfbd892305da7b01a3448c842de5d44ffd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737381895224097805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12bde765-1258-4689-b448-64208dd30638,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad5bfa1e79b4f8af2beae628923bba45357591eab161f0561b8db444a91eade,PodSandboxId:efd0f5725ccd22f1f96da93bf4d61d1435703d719ad1f9ceeb672fb3dd79e0b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381894134545158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2fbd7,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: d2cf52fe-b375-47cd-a4bf-d7ef8e07a4b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600724457b6b9a7e05ae381086c9ac8d8459886b1fe89052371e8eda011461e6,PodSandboxId:81a93c91b321371c8d740b227337cee5d57a4e1e06a2be29d40f24e567da94c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381894098458339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-86xhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af72226-8186-40e7-a923-01381cc52731,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f973863199f28514df15d4602430f3d53d8b2cddf8d61926ee05d4008a80d9d,PodSandboxId:a56caf8f857e26f0810834fc0532d61aad8f39c4f1aa3f5a32d7ee52923be282,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737381892943209461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr6tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462ab3d1-c225-4319-bac8-926a1e43a14d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a9f6ca99f44f43093f142f996c2a59b01a07a70d53c57c746c57c9a7e687d77,PodSandboxId:670f516ac6f21739d1bbd5c0ebbfb77fde2c72b8d3975867b84ed1e916af478a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe412
7810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737381881816338441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e5630aa0948b0c64e006de454d942e2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77a00016c81ae21c4c6a54f41f7a03e6daea7b40a27fcc18129d32b1937911b,PodSandboxId:9ab0c9c5f04ceecef2fbaa1a16fd54d0099f170cefbd26bd70187a876ee88646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a
39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737381881821578787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c0305cd27e9c08360ae1277590706,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033ac1e428c1fb03123cf80df48baa1553707be2cadb106dd717fc64c02b697c,PodSandboxId:791458b59a01edbe7901557e5adbdaf546737c3b80b471ad0b2b7941911b30ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0a
f08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737381881772986427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3978b83de5cac58d7eac30b0020136a4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc003a7dfafb6efece53ff693b58f94e6e484ebf7ffc5acece80f3ed389368a,PodSandboxId:ccd90892ca820e05c898ef28caf7988b071d12aefe04aeba0303581667d9ac7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535
f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737381881707981417,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32923d074762278ed6be9cdc1167454,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4be9fd430f2b319fcacdeadb560de6d13ebd50f2c94806508ca54deacdd809,PodSandboxId:53664b1fd450c2fdcc421d9bcf3ccaedca7e0badf040519646909dd0b3c0c9bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381590946224876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c0305cd27e9c08360ae1277590706,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09952d04-e6a6-4b8d-a0b3-633fb5b63425 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.731754991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6e64cd8-d40d-4b9f-82ca-6b253f0349e5 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.731831158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6e64cd8-d40d-4b9f-82ca-6b253f0349e5 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.732934862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af9a01a0-8aaf-4769-b1ca-440f364e7222 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.733361293Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383148733330564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af9a01a0-8aaf-4769-b1ca-440f364e7222 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.734318534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30f11069-dbc2-4e54-bbab-458677adf755 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.734446696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30f11069-dbc2-4e54-bbab-458677adf755 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:48 no-preload-648067 crio[732]: time="2025-01-20 14:25:48.734992180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a6f657375efdb53cffff0adc6df9efae3906780516424446de56b1ec4d3364ad,PodSandboxId:50eac0bc57e05b2b33fbac3552aadcb92e2ea1dcee61a291eff29bfb66c4e69d,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737382870593202994,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-zjrbt,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: c9263a9e-0a61-4ca8-8149-6a2b80230e0b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.c
ontainer.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a60289c0bf5ac1ec132e0fb0f6b6caaa68a88627aa832ff87079a786d397bb5,PodSandboxId:d0e92ff4b322c51d0d65f63c283e9b3d0d3d1f955e7d1631fc3b75a2a7cb8666,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737381904010486019,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-hvsl5,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kuberne
tes.pod.uid: 46e04138-9d1c-46b8-a731-e36e24f32195,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba6bb456d752d3bdbab368738dffa57ac0c8aac392f4b605713c8d97e55f60e,PodSandboxId:fb49e2974fb8933547bf866e593dabbfbd892305da7b01a3448c842de5d44ffd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737381895224097805,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: stor
age-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12bde765-1258-4689-b448-64208dd30638,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad5bfa1e79b4f8af2beae628923bba45357591eab161f0561b8db444a91eade,PodSandboxId:efd0f5725ccd22f1f96da93bf4d61d1435703d719ad1f9ceeb672fb3dd79e0b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381894134545158,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2fbd7,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: d2cf52fe-b375-47cd-a4bf-d7ef8e07a4b7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600724457b6b9a7e05ae381086c9ac8d8459886b1fe89052371e8eda011461e6,PodSandboxId:81a93c91b321371c8d740b227337cee5d57a4e1e06a2be29d40f24e567da94c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc5675
91790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381894098458339,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-86xhz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af72226-8186-40e7-a923-01381cc52731,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f973863199f28514df15d4602430f3d53d8b2cddf8d61926ee05d4008a80d9d,PodSandboxId:a56caf8f857e26f0810834fc0532d61aad8f39c4f1aa3f5a32d7ee52923be282,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Ima
ge:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737381892943209461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr6tq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 462ab3d1-c225-4319-bac8-926a1e43a14d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a9f6ca99f44f43093f142f996c2a59b01a07a70d53c57c746c57c9a7e687d77,PodSandboxId:670f516ac6f21739d1bbd5c0ebbfb77fde2c72b8d3975867b84ed1e916af478a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe412
7810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737381881816338441,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e5630aa0948b0c64e006de454d942e2,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b77a00016c81ae21c4c6a54f41f7a03e6daea7b40a27fcc18129d32b1937911b,PodSandboxId:9ab0c9c5f04ceecef2fbaa1a16fd54d0099f170cefbd26bd70187a876ee88646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a
39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737381881821578787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c0305cd27e9c08360ae1277590706,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033ac1e428c1fb03123cf80df48baa1553707be2cadb106dd717fc64c02b697c,PodSandboxId:791458b59a01edbe7901557e5adbdaf546737c3b80b471ad0b2b7941911b30ac,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0a
f08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737381881772986427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3978b83de5cac58d7eac30b0020136a4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afc003a7dfafb6efece53ff693b58f94e6e484ebf7ffc5acece80f3ed389368a,PodSandboxId:ccd90892ca820e05c898ef28caf7988b071d12aefe04aeba0303581667d9ac7b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535
f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737381881707981417,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b32923d074762278ed6be9cdc1167454,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4be9fd430f2b319fcacdeadb560de6d13ebd50f2c94806508ca54deacdd809,PodSandboxId:53664b1fd450c2fdcc421d9bcf3ccaedca7e0badf040519646909dd0b3c0c9bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381590946224876,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-648067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1c0305cd27e9c08360ae1277590706,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30f11069-dbc2-4e54-bbab-458677adf755 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a6f657375efdb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 minutes ago       Exited              dashboard-metrics-scraper   8                   50eac0bc57e05       dashboard-metrics-scraper-86c6bf9756-zjrbt
	9a60289c0bf5a       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   20 minutes ago      Running             kubernetes-dashboard        0                   d0e92ff4b322c       kubernetes-dashboard-7779f9b69b-hvsl5
	6ba6bb456d752       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 minutes ago      Running             storage-provisioner         0                   fb49e2974fb89       storage-provisioner
	0ad5bfa1e79b4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   efd0f5725ccd2       coredns-668d6bf9bc-2fbd7
	600724457b6b9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           20 minutes ago      Running             coredns                     0                   81a93c91b3213       coredns-668d6bf9bc-86xhz
	8f973863199f2       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                           20 minutes ago      Running             kube-proxy                  0                   a56caf8f857e2       kube-proxy-kr6tq
	b77a00016c81a       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           21 minutes ago      Running             kube-apiserver              2                   9ab0c9c5f04ce       kube-apiserver-no-preload-648067
	0a9f6ca99f44f       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                           21 minutes ago      Running             kube-controller-manager     2                   670f516ac6f21       kube-controller-manager-no-preload-648067
	033ac1e428c1f       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                           21 minutes ago      Running             kube-scheduler              2                   791458b59a01e       kube-scheduler-no-preload-648067
	afc003a7dfafb       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   ccd90892ca820       etcd-no-preload-648067
	7d4be9fd430f2       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           25 minutes ago      Exited              kube-apiserver              1                   53664b1fd450c       kube-apiserver-no-preload-648067
	
	
	==> coredns [0ad5bfa1e79b4f8af2beae628923bba45357591eab161f0561b8db444a91eade] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [600724457b6b9a7e05ae381086c9ac8d8459886b1fe89052371e8eda011461e6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-648067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-648067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3
	                    minikube.k8s.io/name=no-preload-648067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T14_04_48_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 14:04:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-648067
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 14:25:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 14:22:08 +0000   Mon, 20 Jan 2025 14:04:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 14:22:08 +0000   Mon, 20 Jan 2025 14:04:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 14:22:08 +0000   Mon, 20 Jan 2025 14:04:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 14:22:08 +0000   Mon, 20 Jan 2025 14:04:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    no-preload-648067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a60765c3ea542e39139c895e97563e1
	  System UUID:                3a60765c-3ea5-42e3-9139-c895e97563e1
	  Boot ID:                    d4772359-b5a1-42ed-99ba-125dc62f5b95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-2fbd7                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-668d6bf9bc-86xhz                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-no-preload-648067                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-648067              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-648067     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-kr6tq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-no-preload-648067              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-9kb5f                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-zjrbt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-hvsl5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node no-preload-648067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node no-preload-648067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node no-preload-648067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m   node-controller  Node no-preload-648067 event: Registered Node no-preload-648067 in Controller
	
	
	==> dmesg <==
	[  +2.935008] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.508942] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.975134] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.064934] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063291] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.210327] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.121712] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +0.306894] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[ +16.811645] systemd-fstab-generator[1322]: Ignoring "noauto" option for root device
	[  +0.059349] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.802234] systemd-fstab-generator[1445]: Ignoring "noauto" option for root device
	[  +3.674152] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.471290] kauditd_printk_skb: 57 callbacks suppressed
	[Jan20 14:00] kauditd_printk_skb: 28 callbacks suppressed
	[Jan20 14:04] systemd-fstab-generator[3193]: Ignoring "noauto" option for root device
	[  +0.074399] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.014343] systemd-fstab-generator[3525]: Ignoring "noauto" option for root device
	[  +0.100991] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.308720] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.129239] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.062813] kauditd_printk_skb: 102 callbacks suppressed
	[Jan20 14:05] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.139909] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [afc003a7dfafb6efece53ff693b58f94e6e484ebf7ffc5acece80f3ed389368a] <==
	{"level":"info","ts":"2025-01-20T14:04:42.301747Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1be8679029844888","local-member-id":"4f06aa0eaa8889d9","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T14:04:42.301841Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T14:04:42.301917Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T14:04:42.317135Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-20T14:05:01.301275Z","caller":"traceutil/trace.go:171","msg":"trace[2131723334] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"125.215431ms","start":"2025-01-20T14:05:01.176031Z","end":"2025-01-20T14:05:01.301247Z","steps":["trace[2131723334] 'process raft request'  (duration: 125.035091ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:05:03.807316Z","caller":"traceutil/trace.go:171","msg":"trace[303063183] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:505; }","duration":"188.751741ms","start":"2025-01-20T14:05:03.618547Z","end":"2025-01-20T14:05:03.807299Z","steps":["trace[303063183] 'read index received'  (duration: 188.595835ms)","trace[303063183] 'applied index is now lower than readState.Index'  (duration: 155.456µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T14:05:03.808018Z","caller":"traceutil/trace.go:171","msg":"trace[1891529022] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"283.755432ms","start":"2025-01-20T14:05:03.524253Z","end":"2025-01-20T14:05:03.808009Z","steps":["trace[1891529022] 'process raft request'  (duration: 282.945603ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T14:05:03.807795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.214727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:05:03.809376Z","caller":"traceutil/trace.go:171","msg":"trace[734931273] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:492; }","duration":"190.879582ms","start":"2025-01-20T14:05:03.618484Z","end":"2025-01-20T14:05:03.809363Z","steps":["trace[734931273] 'agreement among raft nodes before linearized reading'  (duration: 189.218151ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:05:05.262289Z","caller":"traceutil/trace.go:171","msg":"trace[1619553536] transaction","detail":"{read_only:false; response_revision:498; number_of_response:1; }","duration":"125.430218ms","start":"2025-01-20T14:05:05.136845Z","end":"2025-01-20T14:05:05.262275Z","steps":["trace[1619553536] 'process raft request'  (duration: 125.033833ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:05:07.964892Z","caller":"traceutil/trace.go:171","msg":"trace[244391180] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"121.679971ms","start":"2025-01-20T14:05:07.843186Z","end":"2025-01-20T14:05:07.964866Z","steps":["trace[244391180] 'process raft request'  (duration: 121.398204ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:14:42.952073Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":832}
	{"level":"info","ts":"2025-01-20T14:14:42.984712Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":832,"took":"31.985623ms","hash":3484075542,"current-db-size-bytes":2924544,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2924544,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2025-01-20T14:14:42.984796Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3484075542,"revision":832,"compact-revision":-1}
	{"level":"info","ts":"2025-01-20T14:19:42.961834Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1084}
	{"level":"info","ts":"2025-01-20T14:19:42.968439Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1084,"took":"5.696775ms","hash":2590472209,"current-db-size-bytes":2924544,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1740800,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-01-20T14:19:42.968548Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2590472209,"revision":1084,"compact-revision":832}
	{"level":"info","ts":"2025-01-20T14:24:42.968886Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1335}
	{"level":"info","ts":"2025-01-20T14:24:42.973603Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1335,"took":"3.972363ms","hash":398359969,"current-db-size-bytes":2924544,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1769472,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T14:24:42.973711Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":398359969,"revision":1335,"compact-revision":1084}
	{"level":"warn","ts":"2025-01-20T14:25:42.352217Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.786676ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9933133747927427960 > lease_revoke:<id:09d9948407a8aae2>","response":"size:27"}
	{"level":"info","ts":"2025-01-20T14:25:42.352782Z","caller":"traceutil/trace.go:171","msg":"trace[2014137747] transaction","detail":"{read_only:false; response_revision:1636; number_of_response:1; }","duration":"155.928765ms","start":"2025-01-20T14:25:42.196821Z","end":"2025-01-20T14:25:42.352750Z","steps":["trace[2014137747] 'process raft request'  (duration: 155.811139ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:25:42.353015Z","caller":"traceutil/trace.go:171","msg":"trace[1980745775] linearizableReadLoop","detail":"{readStateIndex:1909; appliedIndex:1908; }","duration":"217.087742ms","start":"2025-01-20T14:25:42.135913Z","end":"2025-01-20T14:25:42.353000Z","steps":["trace[1980745775] 'read index received'  (duration: 20.197357ms)","trace[1980745775] 'applied index is now lower than readState.Index'  (duration: 196.889869ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T14:25:42.353154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.206515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:25:42.353194Z","caller":"traceutil/trace.go:171","msg":"trace[1471442622] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1636; }","duration":"217.301159ms","start":"2025-01-20T14:25:42.135881Z","end":"2025-01-20T14:25:42.353182Z","steps":["trace[1471442622] 'agreement among raft nodes before linearized reading'  (duration: 217.197ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:25:49 up 26 min,  0 users,  load average: 0.17, 0.19, 0.18
	Linux no-preload-648067 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7d4be9fd430f2b319fcacdeadb560de6d13ebd50f2c94806508ca54deacdd809] <==
	W0120 14:04:36.911132       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:36.979108       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.005962       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.161612       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.215836       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.253022       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.290451       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.337136       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.366249       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.379980       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.396671       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.438703       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.447174       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.813261       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.871601       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:37.933097       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:38.117262       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:38.119698       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:38.160715       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:38.162225       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:38.187734       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:38.244674       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:38.258528       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:38.328261       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:04:38.928934       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b77a00016c81ae21c4c6a54f41f7a03e6daea7b40a27fcc18129d32b1937911b] <==
	I0120 14:22:45.755735       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:22:45.755871       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 14:24:44.756962       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:24:44.757195       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 14:24:45.759912       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:24:45.760050       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 14:24:45.760117       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:24:45.760187       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 14:24:45.761342       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:24:45.761559       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 14:25:45.761721       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:25:45.761820       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0120 14:25:45.761722       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:25:45.761906       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 14:25:45.763869       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:25:45.763925       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0a9f6ca99f44f43093f142f996c2a59b01a07a70d53c57c746c57c9a7e687d77] <==
	E0120 14:20:51.516681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:20:51.600181       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 14:21:03.600059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="224.776µs"
	I0120 14:21:11.158253       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="75.852µs"
	I0120 14:21:14.598743       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="192.911µs"
	I0120 14:21:17.620049       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="43.203µs"
	E0120 14:21:21.526365       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:21:21.608006       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:21:51.535663       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:21:51.616555       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 14:22:08.463955       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-648067"
	E0120 14:22:21.543112       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:22:21.624981       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:22:51.550703       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:22:51.634657       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:23:21.557594       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:23:21.646155       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:23:51.567004       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:23:51.657265       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:24:21.573659       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:24:21.667088       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:24:51.584111       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:24:51.675524       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:25:21.592058       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:25:21.684014       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [8f973863199f28514df15d4602430f3d53d8b2cddf8d61926ee05d4008a80d9d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 14:04:53.678176       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 14:04:53.692259       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.76"]
	E0120 14:04:53.693635       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 14:04:54.047193       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 14:04:54.047238       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 14:04:54.047276       1 server_linux.go:170] "Using iptables Proxier"
	I0120 14:04:54.055937       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 14:04:54.056196       1 server.go:497] "Version info" version="v1.32.0"
	I0120 14:04:54.056231       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 14:04:54.058598       1 config.go:199] "Starting service config controller"
	I0120 14:04:54.058639       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 14:04:54.058668       1 config.go:105] "Starting endpoint slice config controller"
	I0120 14:04:54.058672       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 14:04:54.059174       1 config.go:329] "Starting node config controller"
	I0120 14:04:54.059206       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 14:04:54.159008       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 14:04:54.159061       1 shared_informer.go:320] Caches are synced for service config
	I0120 14:04:54.159324       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [033ac1e428c1fb03123cf80df48baa1553707be2cadb106dd717fc64c02b697c] <==
	W0120 14:04:45.696473       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 14:04:45.696580       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:45.697650       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 14:04:45.697692       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:45.715582       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 14:04:45.715672       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:45.720923       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 14:04:45.721171       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:45.777066       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 14:04:45.777194       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 14:04:45.895699       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 14:04:45.895752       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:46.027004       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0120 14:04:46.028099       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:46.090182       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 14:04:46.090239       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:46.092123       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 14:04:46.092242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:46.112453       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0120 14:04:46.112596       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:46.121668       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 14:04:46.121764       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:04:46.122577       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 14:04:46.122602       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0120 14:04:48.169022       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 14:25:08 no-preload-648067 kubelet[3532]: E0120 14:25:08.070072    3532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383108068955503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:25:10 no-preload-648067 kubelet[3532]: E0120 14:25:10.579539    3532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-9kb5f" podUID="fb8dd9df-cd37-4779-af22-4abd91dbc421"
	Jan 20 14:25:15 no-preload-648067 kubelet[3532]: I0120 14:25:15.578470    3532 scope.go:117] "RemoveContainer" containerID="a6f657375efdb53cffff0adc6df9efae3906780516424446de56b1ec4d3364ad"
	Jan 20 14:25:15 no-preload-648067 kubelet[3532]: E0120 14:25:15.578697    3532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zjrbt_kubernetes-dashboard(c9263a9e-0a61-4ca8-8149-6a2b80230e0b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zjrbt" podUID="c9263a9e-0a61-4ca8-8149-6a2b80230e0b"
	Jan 20 14:25:18 no-preload-648067 kubelet[3532]: E0120 14:25:18.072319    3532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383118071804639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:25:18 no-preload-648067 kubelet[3532]: E0120 14:25:18.072855    3532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383118071804639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:25:25 no-preload-648067 kubelet[3532]: E0120 14:25:25.579511    3532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-9kb5f" podUID="fb8dd9df-cd37-4779-af22-4abd91dbc421"
	Jan 20 14:25:27 no-preload-648067 kubelet[3532]: I0120 14:25:27.581685    3532 scope.go:117] "RemoveContainer" containerID="a6f657375efdb53cffff0adc6df9efae3906780516424446de56b1ec4d3364ad"
	Jan 20 14:25:27 no-preload-648067 kubelet[3532]: E0120 14:25:27.582007    3532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zjrbt_kubernetes-dashboard(c9263a9e-0a61-4ca8-8149-6a2b80230e0b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zjrbt" podUID="c9263a9e-0a61-4ca8-8149-6a2b80230e0b"
	Jan 20 14:25:28 no-preload-648067 kubelet[3532]: E0120 14:25:28.075544    3532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383128075100897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:25:28 no-preload-648067 kubelet[3532]: E0120 14:25:28.075629    3532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383128075100897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:25:38 no-preload-648067 kubelet[3532]: E0120 14:25:38.079991    3532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383138079142575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:25:38 no-preload-648067 kubelet[3532]: E0120 14:25:38.080044    3532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383138079142575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:25:38 no-preload-648067 kubelet[3532]: I0120 14:25:38.578365    3532 scope.go:117] "RemoveContainer" containerID="a6f657375efdb53cffff0adc6df9efae3906780516424446de56b1ec4d3364ad"
	Jan 20 14:25:38 no-preload-648067 kubelet[3532]: E0120 14:25:38.579064    3532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zjrbt_kubernetes-dashboard(c9263a9e-0a61-4ca8-8149-6a2b80230e0b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zjrbt" podUID="c9263a9e-0a61-4ca8-8149-6a2b80230e0b"
	Jan 20 14:25:39 no-preload-648067 kubelet[3532]: E0120 14:25:39.579892    3532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-9kb5f" podUID="fb8dd9df-cd37-4779-af22-4abd91dbc421"
	Jan 20 14:25:47 no-preload-648067 kubelet[3532]: E0120 14:25:47.631094    3532 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 14:25:47 no-preload-648067 kubelet[3532]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 14:25:47 no-preload-648067 kubelet[3532]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 14:25:47 no-preload-648067 kubelet[3532]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 14:25:47 no-preload-648067 kubelet[3532]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 14:25:48 no-preload-648067 kubelet[3532]: E0120 14:25:48.082186    3532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383148081775686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:25:48 no-preload-648067 kubelet[3532]: E0120 14:25:48.082228    3532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383148081775686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:152099,},InodesUsed:&UInt64Value{Value:56,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:25:49 no-preload-648067 kubelet[3532]: I0120 14:25:49.582466    3532 scope.go:117] "RemoveContainer" containerID="a6f657375efdb53cffff0adc6df9efae3906780516424446de56b1ec4d3364ad"
	Jan 20 14:25:49 no-preload-648067 kubelet[3532]: E0120 14:25:49.582770    3532 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zjrbt_kubernetes-dashboard(c9263a9e-0a61-4ca8-8149-6a2b80230e0b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zjrbt" podUID="c9263a9e-0a61-4ca8-8149-6a2b80230e0b"
	
	
	==> kubernetes-dashboard [9a60289c0bf5ac1ec132e0fb0f6b6caaa68a88627aa832ff87079a786d397bb5] <==
	2025/01/20 14:13:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:14:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:14:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:15:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:15:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:16:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:16:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:17:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:17:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:18:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:18:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:19:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:19:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:20:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:20:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:21:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:21:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:22:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:22:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:23:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:23:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:24:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:24:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:25:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:25:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6ba6bb456d752d3bdbab368738dffa57ac0c8aac392f4b605713c8d97e55f60e] <==
	I0120 14:04:55.410014       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 14:04:55.436349       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 14:04:55.436612       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 14:04:55.453246       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 14:04:55.453485       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-648067_3bd77ab1-c253-4a60-a146-5376cf2b04e5!
	I0120 14:04:55.454630       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5d6d54ed-e662-4fe7-a487-6f02bb576c60", APIVersion:"v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-648067_3bd77ab1-c253-4a60-a146-5376cf2b04e5 became leader
	I0120 14:04:55.554555       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-648067_3bd77ab1-c253-4a60-a146-5376cf2b04e5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-648067 -n no-preload-648067
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-648067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-9kb5f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-648067 describe pod metrics-server-f79f97bbb-9kb5f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-648067 describe pod metrics-server-f79f97bbb-9kb5f: exit status 1 (70.992583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-9kb5f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-648067 describe pod metrics-server-f79f97bbb-9kb5f: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (1600.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-191446 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-191446 create -f testdata/busybox.yaml: exit status 1 (47.155943ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-191446" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-191446 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 6 (250.920587ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 13:59:53.671178 1970347 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-191446" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-191446" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 6 (275.145212ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 13:59:53.945330 1970376 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-191446" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-191446" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-191446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-191446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.575003923s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-191446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-191446 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-191446 describe deploy/metrics-server -n kube-system: exit status 1 (49.512931ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-191446" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-191446 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 6 (254.204664ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0120 14:01:23.822822 1971041 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-191446" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-191446" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (1624.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-647109 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-647109 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: signal: killed (27m2.2877664s)

                                                
                                                
-- stdout --
	* [embed-certs-647109] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "embed-certs-647109" primary control-plane node in "embed-certs-647109" cluster
	* Restarting existing kvm2 VM for "embed-certs-647109" ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-647109 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 14:00:03.843296 1970602 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:00:03.843414 1970602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:00:03.843426 1970602 out.go:358] Setting ErrFile to fd 2...
	I0120 14:00:03.843432 1970602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:00:03.843625 1970602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 14:00:03.844227 1970602 out.go:352] Setting JSON to false
	I0120 14:00:03.845278 1970602 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20550,"bootTime":1737361054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:00:03.845421 1970602 start.go:139] virtualization: kvm guest
	I0120 14:00:03.847881 1970602 out.go:177] * [embed-certs-647109] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:00:03.849446 1970602 notify.go:220] Checking for updates...
	I0120 14:00:03.849468 1970602 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:00:03.850889 1970602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:00:03.852399 1970602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:00:03.854002 1970602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:00:03.855474 1970602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:00:03.856813 1970602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:00:03.858526 1970602 config.go:182] Loaded profile config "embed-certs-647109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:00:03.858967 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:00:03.859048 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:00:03.875085 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0120 14:00:03.875561 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:00:03.876146 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:00:03.876178 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:00:03.876546 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:00:03.876751 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:00:03.877022 1970602 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:00:03.877363 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:00:03.877416 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:00:03.893906 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37979
	I0120 14:00:03.894443 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:00:03.895093 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:00:03.895133 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:00:03.895501 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:00:03.895672 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:00:03.936801 1970602 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 14:00:03.938413 1970602 start.go:297] selected driver: kvm2
	I0120 14:00:03.938434 1970602 start.go:901] validating driver "kvm2" against &{Name:embed-certs-647109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-647109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:00:03.938633 1970602 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:00:03.939356 1970602 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:00:03.939438 1970602 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:00:03.956603 1970602 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:00:03.957050 1970602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:00:03.957097 1970602 cni.go:84] Creating CNI manager for ""
	I0120 14:00:03.957161 1970602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:00:03.957209 1970602 start.go:340] cluster config:
	{Name:embed-certs-647109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-647109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:00:03.957351 1970602 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:00:03.959333 1970602 out.go:177] * Starting "embed-certs-647109" primary control-plane node in "embed-certs-647109" cluster
	I0120 14:00:03.960851 1970602 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:00:03.960901 1970602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 14:00:03.960912 1970602 cache.go:56] Caching tarball of preloaded images
	I0120 14:00:03.961026 1970602 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 14:00:03.961038 1970602 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 14:00:03.961204 1970602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/config.json ...
	I0120 14:00:03.961467 1970602 start.go:360] acquireMachinesLock for embed-certs-647109: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:00:03.961541 1970602 start.go:364] duration metric: took 45.509µs to acquireMachinesLock for "embed-certs-647109"
	I0120 14:00:03.961562 1970602 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:00:03.961570 1970602 fix.go:54] fixHost starting: 
	I0120 14:00:03.961866 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:00:03.961913 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:00:03.977673 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36473
	I0120 14:00:03.978215 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:00:03.978772 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:00:03.978806 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:00:03.979284 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:00:03.979533 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:00:03.979756 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:00:03.981642 1970602 fix.go:112] recreateIfNeeded on embed-certs-647109: state=Stopped err=<nil>
	I0120 14:00:03.981672 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	W0120 14:00:03.981845 1970602 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:00:03.984024 1970602 out.go:177] * Restarting existing kvm2 VM for "embed-certs-647109" ...
	I0120 14:00:03.985333 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Start
	I0120 14:00:03.985596 1970602 main.go:141] libmachine: (embed-certs-647109) starting domain...
	I0120 14:00:03.985621 1970602 main.go:141] libmachine: (embed-certs-647109) ensuring networks are active...
	I0120 14:00:03.986717 1970602 main.go:141] libmachine: (embed-certs-647109) Ensuring network default is active
	I0120 14:00:03.987066 1970602 main.go:141] libmachine: (embed-certs-647109) Ensuring network mk-embed-certs-647109 is active
	I0120 14:00:03.987549 1970602 main.go:141] libmachine: (embed-certs-647109) getting domain XML...
	I0120 14:00:03.988345 1970602 main.go:141] libmachine: (embed-certs-647109) creating domain...
	I0120 14:00:05.294462 1970602 main.go:141] libmachine: (embed-certs-647109) waiting for IP...
	I0120 14:00:05.295689 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:05.296179 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:05.296292 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:05.296171 1970637 retry.go:31] will retry after 303.379987ms: waiting for domain to come up
	I0120 14:00:05.601080 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:05.601633 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:05.601663 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:05.601604 1970637 retry.go:31] will retry after 292.388682ms: waiting for domain to come up
	I0120 14:00:05.895387 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:05.895964 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:05.896029 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:05.895933 1970637 retry.go:31] will retry after 475.151793ms: waiting for domain to come up
	I0120 14:00:06.372684 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:06.373287 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:06.373332 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:06.373254 1970637 retry.go:31] will retry after 512.398234ms: waiting for domain to come up
	I0120 14:00:06.887004 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:06.887477 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:06.887517 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:06.887450 1970637 retry.go:31] will retry after 644.847622ms: waiting for domain to come up
	I0120 14:00:07.534814 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:07.535344 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:07.535373 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:07.535291 1970637 retry.go:31] will retry after 924.213776ms: waiting for domain to come up
	I0120 14:00:08.461009 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:08.461503 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:08.461533 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:08.461455 1970637 retry.go:31] will retry after 1.154573746s: waiting for domain to come up
	I0120 14:00:09.618007 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:09.618537 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:09.618564 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:09.618510 1970637 retry.go:31] will retry after 1.14696043s: waiting for domain to come up
	I0120 14:00:10.766846 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:10.767484 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:10.767517 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:10.767422 1970637 retry.go:31] will retry after 1.13356652s: waiting for domain to come up
	I0120 14:00:11.903244 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:11.903783 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:11.903840 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:11.903743 1970637 retry.go:31] will retry after 1.524578014s: waiting for domain to come up
	I0120 14:00:13.430883 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:13.431485 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:13.431519 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:13.431436 1970637 retry.go:31] will retry after 2.205411088s: waiting for domain to come up
	I0120 14:00:15.638236 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:15.638819 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:15.638852 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:15.638778 1970637 retry.go:31] will retry after 2.685880588s: waiting for domain to come up
	I0120 14:00:18.326202 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:18.326754 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:18.326785 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:18.326717 1970637 retry.go:31] will retry after 2.941907703s: waiting for domain to come up
	I0120 14:00:21.271956 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:21.272586 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | unable to find current IP address of domain embed-certs-647109 in network mk-embed-certs-647109
	I0120 14:00:21.272618 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | I0120 14:00:21.272515 1970637 retry.go:31] will retry after 5.427921704s: waiting for domain to come up
	I0120 14:00:26.702348 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.702920 1970602 main.go:141] libmachine: (embed-certs-647109) found domain IP: 192.168.50.62
	I0120 14:00:26.702951 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has current primary IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.702959 1970602 main.go:141] libmachine: (embed-certs-647109) reserving static IP address...
	I0120 14:00:26.703473 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "embed-certs-647109", mac: "52:54:00:31:ac:09", ip: "192.168.50.62"} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:26.703515 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | skip adding static IP to network mk-embed-certs-647109 - found existing host DHCP lease matching {name: "embed-certs-647109", mac: "52:54:00:31:ac:09", ip: "192.168.50.62"}
	I0120 14:00:26.703533 1970602 main.go:141] libmachine: (embed-certs-647109) reserved static IP address 192.168.50.62 for domain embed-certs-647109
	I0120 14:00:26.703549 1970602 main.go:141] libmachine: (embed-certs-647109) waiting for SSH...
	I0120 14:00:26.703564 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Getting to WaitForSSH function...
	I0120 14:00:26.705819 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.706235 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:26.706260 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.706379 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Using SSH client type: external
	I0120 14:00:26.706411 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa (-rw-------)
	I0120 14:00:26.706445 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:00:26.706455 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | About to run SSH command:
	I0120 14:00:26.706463 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | exit 0
	I0120 14:00:26.831405 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | SSH cmd err, output: <nil>: 
	I0120 14:00:26.831970 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetConfigRaw
	I0120 14:00:26.832792 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetIP
	I0120 14:00:26.835846 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.836259 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:26.836290 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.836556 1970602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/config.json ...
	I0120 14:00:26.836779 1970602 machine.go:93] provisionDockerMachine start ...
	I0120 14:00:26.836803 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:00:26.837045 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:26.839622 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.839966 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:26.840022 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.840151 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:00:26.840367 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:26.840515 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:26.840687 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:00:26.840876 1970602 main.go:141] libmachine: Using SSH client type: native
	I0120 14:00:26.841133 1970602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 14:00:26.841148 1970602 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:00:26.944606 1970602 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:00:26.944643 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetMachineName
	I0120 14:00:26.944981 1970602 buildroot.go:166] provisioning hostname "embed-certs-647109"
	I0120 14:00:26.945004 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetMachineName
	I0120 14:00:26.945221 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:26.948341 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.948665 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:26.948696 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:26.948881 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:00:26.949114 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:26.949279 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:26.949419 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:00:26.949595 1970602 main.go:141] libmachine: Using SSH client type: native
	I0120 14:00:26.949830 1970602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 14:00:26.949849 1970602 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-647109 && echo "embed-certs-647109" | sudo tee /etc/hostname
	I0120 14:00:27.066858 1970602 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-647109
	
	I0120 14:00:27.066896 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:27.070187 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.070780 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:27.070807 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.071144 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:00:27.071415 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:27.071623 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:27.071795 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:00:27.071970 1970602 main.go:141] libmachine: Using SSH client type: native
	I0120 14:00:27.072222 1970602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 14:00:27.072251 1970602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-647109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-647109/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-647109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:00:27.187251 1970602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:00:27.187291 1970602 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:00:27.187329 1970602 buildroot.go:174] setting up certificates
	I0120 14:00:27.187347 1970602 provision.go:84] configureAuth start
	I0120 14:00:27.187363 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetMachineName
	I0120 14:00:27.187691 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetIP
	I0120 14:00:27.190674 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.191148 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:27.191185 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.191423 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:27.194160 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.194633 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:27.194675 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.194800 1970602 provision.go:143] copyHostCerts
	I0120 14:00:27.194875 1970602 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:00:27.194893 1970602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:00:27.194962 1970602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:00:27.195065 1970602 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:00:27.195073 1970602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:00:27.195096 1970602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:00:27.195259 1970602 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:00:27.195270 1970602 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:00:27.195302 1970602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:00:27.195405 1970602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.embed-certs-647109 san=[127.0.0.1 192.168.50.62 embed-certs-647109 localhost minikube]
	I0120 14:00:27.391280 1970602 provision.go:177] copyRemoteCerts
	I0120 14:00:27.391350 1970602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:00:27.391378 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:27.394224 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.394575 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:27.394628 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.394807 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:00:27.395021 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:27.395201 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:00:27.395346 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:00:27.478775 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 14:00:27.504345 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:00:27.531315 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0120 14:00:27.556872 1970602 provision.go:87] duration metric: took 369.504011ms to configureAuth
	I0120 14:00:27.556921 1970602 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:00:27.557203 1970602 config.go:182] Loaded profile config "embed-certs-647109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:00:27.557325 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:27.560517 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.561045 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:27.561088 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.561300 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:00:27.561535 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:27.561699 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:27.561822 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:00:27.561983 1970602 main.go:141] libmachine: Using SSH client type: native
	I0120 14:00:27.562188 1970602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 14:00:27.562210 1970602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:00:27.791177 1970602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:00:27.791219 1970602 machine.go:96] duration metric: took 954.42495ms to provisionDockerMachine
	I0120 14:00:27.791233 1970602 start.go:293] postStartSetup for "embed-certs-647109" (driver="kvm2")
	I0120 14:00:27.791243 1970602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:00:27.791275 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:00:27.791649 1970602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:00:27.791700 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:27.794639 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.794961 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:27.794992 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.795151 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:00:27.795368 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:27.795552 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:00:27.795713 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:00:27.877291 1970602 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:00:27.881756 1970602 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:00:27.881792 1970602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:00:27.881874 1970602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:00:27.881981 1970602 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:00:27.882110 1970602 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:00:27.892029 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:00:27.918302 1970602 start.go:296] duration metric: took 127.051853ms for postStartSetup
	I0120 14:00:27.918353 1970602 fix.go:56] duration metric: took 23.956784552s for fixHost
	I0120 14:00:27.918378 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:27.921205 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.921633 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:27.921675 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:27.921855 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:00:27.922080 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:27.922231 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:27.922417 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:00:27.922563 1970602 main.go:141] libmachine: Using SSH client type: native
	I0120 14:00:27.922824 1970602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.62 22 <nil> <nil>}
	I0120 14:00:27.922838 1970602 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:00:28.023946 1970602 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381627.998286221
	
	I0120 14:00:28.023970 1970602 fix.go:216] guest clock: 1737381627.998286221
	I0120 14:00:28.023999 1970602 fix.go:229] Guest: 2025-01-20 14:00:27.998286221 +0000 UTC Remote: 2025-01-20 14:00:27.91835801 +0000 UTC m=+24.116200024 (delta=79.928211ms)
	I0120 14:00:28.024023 1970602 fix.go:200] guest clock delta is within tolerance: 79.928211ms
	I0120 14:00:28.024028 1970602 start.go:83] releasing machines lock for "embed-certs-647109", held for 24.062476125s
	I0120 14:00:28.024050 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:00:28.024394 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetIP
	I0120 14:00:28.027623 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:28.028007 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:28.028047 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:28.028181 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:00:28.028804 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:00:28.028998 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:00:28.029091 1970602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:00:28.029155 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:28.029244 1970602 ssh_runner.go:195] Run: cat /version.json
	I0120 14:00:28.029262 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:00:28.031894 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:28.032092 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:28.032277 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:28.032318 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:28.032346 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:28.032362 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:28.032413 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:00:28.032592 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:28.032769 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:00:28.032782 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:00:28.032938 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:00:28.032947 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:00:28.033090 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:00:28.033216 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:00:28.139699 1970602 ssh_runner.go:195] Run: systemctl --version
	I0120 14:00:28.146592 1970602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:00:28.305179 1970602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:00:28.312067 1970602 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:00:28.312146 1970602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:00:28.329787 1970602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:00:28.329817 1970602 start.go:495] detecting cgroup driver to use...
	I0120 14:00:28.329907 1970602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:00:28.348950 1970602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:00:28.365216 1970602 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:00:28.365280 1970602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:00:28.381548 1970602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:00:28.399038 1970602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:00:28.525248 1970602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:00:28.684492 1970602 docker.go:233] disabling docker service ...
	I0120 14:00:28.684565 1970602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:00:28.701454 1970602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:00:28.718188 1970602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:00:28.846964 1970602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:00:28.980616 1970602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:00:28.995740 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:00:29.015659 1970602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 14:00:29.015746 1970602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:00:29.027153 1970602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:00:29.027224 1970602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:00:29.038874 1970602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:00:29.051903 1970602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:00:29.063553 1970602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:00:29.075366 1970602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:00:29.087279 1970602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:00:29.105873 1970602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:00:29.117282 1970602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:00:29.127811 1970602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:00:29.127887 1970602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:00:29.142487 1970602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:00:29.153820 1970602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:00:29.279678 1970602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:00:29.379277 1970602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:00:29.379370 1970602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:00:29.384595 1970602 start.go:563] Will wait 60s for crictl version
	I0120 14:00:29.384667 1970602 ssh_runner.go:195] Run: which crictl
	I0120 14:00:29.388782 1970602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:00:29.430756 1970602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:00:29.430853 1970602 ssh_runner.go:195] Run: crio --version
	I0120 14:00:29.461897 1970602 ssh_runner.go:195] Run: crio --version
	I0120 14:00:29.493729 1970602 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 14:00:29.495007 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetIP
	I0120 14:00:29.497711 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:29.497994 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:00:29.498034 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:00:29.498318 1970602 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 14:00:29.502626 1970602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:00:29.515505 1970602 kubeadm.go:883] updating cluster {Name:embed-certs-647109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-647109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:00:29.515637 1970602 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:00:29.515690 1970602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:00:29.559492 1970602 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 14:00:29.559565 1970602 ssh_runner.go:195] Run: which lz4
	I0120 14:00:29.564312 1970602 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:00:29.568990 1970602 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:00:29.569038 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 14:00:31.075230 1970602 crio.go:462] duration metric: took 1.510942801s to copy over tarball
	I0120 14:00:31.075337 1970602 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 14:00:33.349175 1970602 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273795072s)
	I0120 14:00:33.349213 1970602 crio.go:469] duration metric: took 2.273945524s to extract the tarball
	I0120 14:00:33.349221 1970602 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 14:00:33.388334 1970602 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:00:33.436326 1970602 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 14:00:33.436354 1970602 cache_images.go:84] Images are preloaded, skipping loading
	I0120 14:00:33.436362 1970602 kubeadm.go:934] updating node { 192.168.50.62 8443 v1.32.0 crio true true} ...
	I0120 14:00:33.436472 1970602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-647109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-647109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:00:33.436545 1970602 ssh_runner.go:195] Run: crio config
	I0120 14:00:33.483056 1970602 cni.go:84] Creating CNI manager for ""
	I0120 14:00:33.483093 1970602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:00:33.483109 1970602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:00:33.483133 1970602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.62 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-647109 NodeName:embed-certs-647109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 14:00:33.483280 1970602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-647109"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:00:33.483360 1970602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 14:00:33.494309 1970602 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:00:33.494403 1970602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:00:33.505552 1970602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0120 14:00:33.523307 1970602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:00:33.541137 1970602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
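	
	The block above is the multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---) that gets written to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch, assuming gopkg.in/yaml.v3 and run on the node itself, of reading that file back and spot-checking the KubeletConfiguration fields the CRI-O runtime depends on:
	
	package main
	
	import (
		"errors"
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	// doc only declares the fields we want to inspect; other keys are ignored.
	type doc struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var d doc
			if err := dec.Decode(&d); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more YAML documents in the stream
				}
				panic(err)
			}
			if d.Kind == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", d.CgroupDriver)
				fmt.Println("runtime endpoint:", d.ContainerRuntimeEndpoint)
			}
		}
	}
	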
	I0120 14:00:33.559296 1970602 ssh_runner.go:195] Run: grep 192.168.50.62	control-plane.minikube.internal$ /etc/hosts
	I0120 14:00:33.563651 1970602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:00:33.577263 1970602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:00:33.720547 1970602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:00:33.737998 1970602 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109 for IP: 192.168.50.62
	I0120 14:00:33.738024 1970602 certs.go:194] generating shared ca certs ...
	I0120 14:00:33.738044 1970602 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:00:33.738259 1970602 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:00:33.738333 1970602 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:00:33.738346 1970602 certs.go:256] generating profile certs ...
	I0120 14:00:33.738481 1970602 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/client.key
	I0120 14:00:33.738565 1970602 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.key.34f51781
	I0120 14:00:33.738659 1970602 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.key
	I0120 14:00:33.738791 1970602 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:00:33.738844 1970602 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:00:33.738861 1970602 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:00:33.738893 1970602 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:00:33.738928 1970602 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:00:33.738959 1970602 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:00:33.739017 1970602 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:00:33.739952 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:00:33.777891 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:00:33.810304 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:00:33.846884 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:00:33.884792 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0120 14:00:33.916876 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 14:00:33.951986 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:00:33.979046 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/embed-certs-647109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 14:00:34.006764 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:00:34.034073 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:00:34.060340 1970602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:00:34.086626 1970602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:00:34.105014 1970602 ssh_runner.go:195] Run: openssl version
	I0120 14:00:34.111587 1970602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:00:34.123674 1970602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:00:34.128601 1970602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:00:34.128670 1970602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:00:34.135383 1970602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:00:34.147058 1970602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:00:34.158291 1970602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:00:34.163542 1970602 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:00:34.163627 1970602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:00:34.169580 1970602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 14:00:34.181998 1970602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:00:34.194135 1970602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:00:34.199187 1970602 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:00:34.199264 1970602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:00:34.205284 1970602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:00:34.216950 1970602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:00:34.222160 1970602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:00:34.228897 1970602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:00:34.235362 1970602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:00:34.241783 1970602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:00:34.248423 1970602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:00:34.254941 1970602 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
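	
	The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate will still be valid 24 hours from now. A minimal sketch of the same check using Go's crypto/x509 (the path is one of the certs probed above; illustrative only):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// openssl's -checkend 86400 asks: is the cert still valid 24h from now?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h (openssl would exit non-zero)")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}
	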
	I0120 14:00:34.261152 1970602 kubeadm.go:392] StartCluster: {Name:embed-certs-647109 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-647109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:00:34.261249 1970602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:00:34.261364 1970602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:00:34.300375 1970602 cri.go:89] found id: ""
	I0120 14:00:34.300466 1970602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:00:34.310547 1970602 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:00:34.310571 1970602 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:00:34.310646 1970602 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:00:34.320772 1970602 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:00:34.321521 1970602 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-647109" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:00:34.321830 1970602 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-647109" cluster setting kubeconfig missing "embed-certs-647109" context setting]
	I0120 14:00:34.322373 1970602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:00:34.323849 1970602 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:00:34.333722 1970602 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.62
	I0120 14:00:34.333753 1970602 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:00:34.333766 1970602 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 14:00:34.333810 1970602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:00:34.371669 1970602 cri.go:89] found id: ""
	I0120 14:00:34.371737 1970602 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:00:34.387986 1970602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:00:34.398359 1970602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:00:34.398383 1970602 kubeadm.go:157] found existing configuration files:
	
	I0120 14:00:34.398439 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:00:34.408216 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:00:34.408301 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:00:34.418398 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:00:34.427819 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:00:34.427918 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:00:34.437834 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:00:34.447425 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:00:34.447490 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:00:34.457140 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:00:34.467612 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:00:34.467695 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:00:34.477872 1970602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:00:34.491513 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:34.642223 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:35.296915 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:35.523836 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:35.614395 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
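	
	Because the stale config cleanup found nothing to reuse, minikube re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of doing a full kubeadm init. A hypothetical local equivalent of those ssh_runner calls, with error handling reduced to panics; paths are the ones that appear in the log:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.32.0/kubeadm"
		config := "/var/tmp/minikube/kubeadm.yaml"
	
		// Same phase sequence as the five ssh_runner invocations above.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", config)
			cmd := exec.Command(kubeadm, args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			fmt.Println("running:", kubeadm, args)
			if err := cmd.Run(); err != nil {
				panic(err)
			}
		}
	}
	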
	I0120 14:00:35.749849 1970602 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:00:35.749978 1970602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:00:36.250415 1970602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:00:36.750729 1970602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:00:37.250514 1970602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:00:37.265053 1970602 api_server.go:72] duration metric: took 1.515200403s to wait for apiserver process to appear ...
	I0120 14:00:37.265092 1970602 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:00:37.265119 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:00:39.558676 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:00:39.558710 1970602 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:00:39.558730 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:00:39.653337 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:00:39.653395 1970602 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:00:39.765732 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:00:39.776328 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:00:39.776375 1970602 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:00:40.265710 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:00:40.275965 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:00:40.276004 1970602 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:00:40.765655 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:00:40.772192 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:00:40.772218 1970602 api_server.go:103] status: https://192.168.50.62:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:00:41.265946 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:00:41.272642 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I0120 14:00:41.279630 1970602 api_server.go:141] control plane version: v1.32.0
	I0120 14:00:41.279671 1970602 api_server.go:131] duration metric: took 4.014570058s to wait for apiserver health ...
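	
	The healthz wait above tolerates the transient 403 (anonymous user) and 500 (post-start hooks still running) responses and only stops once /healthz returns 200. A rough Go sketch of that polling loop; InsecureSkipVerify is used only to keep the example short, a real client should trust the cluster CA instead:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only; trust the cluster CA in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
				// 403 and 500 are expected briefly while the apiserver finishes starting.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.50.62:8443/healthz", 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthy")
	}
	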
	I0120 14:00:41.279684 1970602 cni.go:84] Creating CNI manager for ""
	I0120 14:00:41.279693 1970602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:00:41.281532 1970602 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:00:41.282943 1970602 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:00:41.295206 1970602 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:00:41.318646 1970602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:00:41.328032 1970602 system_pods.go:59] 8 kube-system pods found
	I0120 14:00:41.328081 1970602 system_pods.go:61] "coredns-668d6bf9bc-nxpkx" [d7fb9a10-0d3a-407a-be4e-f677bdeb2090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:00:41.328091 1970602 system_pods.go:61] "etcd-embed-certs-647109" [d168e761-d96d-404a-87d7-ad0c867b9832] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 14:00:41.328099 1970602 system_pods.go:61] "kube-apiserver-embed-certs-647109" [68f13866-a1ea-4753-8a95-49c1a3b7e4f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 14:00:41.328105 1970602 system_pods.go:61] "kube-controller-manager-embed-certs-647109" [522692bd-794b-4131-8615-68d727292aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 14:00:41.328122 1970602 system_pods.go:61] "kube-proxy-z2927" [9b3e32ee-0672-4a09-8be0-2137797bddb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 14:00:41.328137 1970602 system_pods.go:61] "kube-scheduler-embed-certs-647109" [9f8eb086-de19-4c13-9bb4-8a3becd461be] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 14:00:41.328144 1970602 system_pods.go:61] "metrics-server-f79f97bbb-gx5f6" [019e18ef-41b1-47e3-acfa-d3c0e7866082] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:00:41.328151 1970602 system_pods.go:61] "storage-provisioner" [2f99a506-381c-46fe-a42d-ea139297ffc8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 14:00:41.328166 1970602 system_pods.go:74] duration metric: took 9.48869ms to wait for pod list to return data ...
	I0120 14:00:41.328175 1970602 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:00:41.334732 1970602 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:00:41.334763 1970602 node_conditions.go:123] node cpu capacity is 2
	I0120 14:00:41.334779 1970602 node_conditions.go:105] duration metric: took 6.594695ms to run NodePressure ...
	I0120 14:00:41.334800 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:00:41.635375 1970602 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 14:00:41.640160 1970602 kubeadm.go:739] kubelet initialised
	I0120 14:00:41.640193 1970602 kubeadm.go:740] duration metric: took 4.782878ms waiting for restarted kubelet to initialise ...
	I0120 14:00:41.640208 1970602 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:00:41.645185 1970602 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-nxpkx" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:43.653935 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-nxpkx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:46.154186 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-nxpkx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:48.651584 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-nxpkx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:49.238947 1970602 pod_ready.go:93] pod "coredns-668d6bf9bc-nxpkx" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:49.238986 1970602 pod_ready.go:82] duration metric: took 7.593771498s for pod "coredns-668d6bf9bc-nxpkx" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:49.239021 1970602 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:51.246413 1970602 pod_ready.go:103] pod "etcd-embed-certs-647109" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:53.749162 1970602 pod_ready.go:93] pod "etcd-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:53.749192 1970602 pod_ready.go:82] duration metric: took 4.510162385s for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:53.749203 1970602 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:53.754526 1970602 pod_ready.go:93] pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:53.754565 1970602 pod_ready.go:82] duration metric: took 5.35396ms for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:53.754579 1970602 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:53.758701 1970602 pod_ready.go:93] pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:53.758721 1970602 pod_ready.go:82] duration metric: took 4.135392ms for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:53.758730 1970602 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-z2927" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:53.763381 1970602 pod_ready.go:93] pod "kube-proxy-z2927" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:53.763410 1970602 pod_ready.go:82] duration metric: took 4.67259ms for pod "kube-proxy-z2927" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:53.763422 1970602 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:55.772138 1970602 pod_ready.go:103] pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace has status "Ready":"False"
	I0120 14:00:56.271557 1970602 pod_ready.go:93] pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:00:56.271587 1970602 pod_ready.go:82] duration metric: took 2.508156593s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:56.271602 1970602 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace to be "Ready" ...
	I0120 14:00:58.277629 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:00.279354 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:02.778426 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:05.279658 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:07.282219 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:09.778575 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:12.277842 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:14.277879 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:16.279040 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:18.778417 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:20.779603 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:22.783962 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:25.278554 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:27.278768 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:29.280435 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:31.281034 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:33.778046 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:35.778385 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:37.778939 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:39.779284 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.278570 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:44.778211 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.279184 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:49.280435 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:51.780700 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:54.280536 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:56.779010 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:58.779726 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:01.278692 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:03.777558 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:05.777988 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:08.278483 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:10.778869 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.782521 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:15.279029 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:17.281259 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:19.469918 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:21.779262 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:24.279782 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:26.777640 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:28.778330 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:30.783033 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:33.279302 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:35.282839 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:37.778057 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:39.778127 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:41.778931 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:44.284374 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:46.778063 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:49.277771 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:51.280318 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:53.778876 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.783020 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:58.280672 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:00.778133 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:02.778231 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:04.778368 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:06.778647 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:09.277769 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:11.279861 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.778783 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:16.279194 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:18.279451 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:20.281009 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:22.778157 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:25.279187 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:27.282700 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:29.778719 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:32.279358 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:34.778378 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:36.778557 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:39.278753 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:41.778436 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:44.278244 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:46.777269 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:48.777901 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:50.779106 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:53.278544 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:55.783799 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:58.278376 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:00.279152 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:02.778569 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:04.778659 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.780874 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:09.279024 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:11.279883 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:13.776738 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:15.778836 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:18.279084 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:20.778968 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.779314 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:25.279994 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:27.778659 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:29.780496 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:32.277638 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:34.279211 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:36.778511 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:39.279853 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:41.778833 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:43.779056 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:46.278851 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:48.279224 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:50.279663 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.280483 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:54.779036 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:56.272135 1970602 pod_ready.go:82] duration metric: took 4m0.000512351s for pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace to be "Ready" ...
	E0120 14:04:56.272179 1970602 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:04:56.272203 1970602 pod_ready.go:39] duration metric: took 4m14.631982517s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:56.272284 1970602 kubeadm.go:597] duration metric: took 4m21.961665482s to restartPrimaryControlPlane
	W0120 14:04:56.272373 1970602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:04:56.272404 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:05:24.178761 1970602 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.906332562s)
	I0120 14:05:24.178859 1970602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:05:24.194902 1970602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:05:24.206080 1970602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:05:24.217371 1970602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:05:24.217398 1970602 kubeadm.go:157] found existing configuration files:
	
	I0120 14:05:24.217448 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:05:24.227549 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:05:24.227627 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:05:24.238584 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:05:24.249016 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:05:24.249171 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:05:24.260537 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:05:24.270728 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:05:24.270792 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:05:24.281345 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:05:24.291266 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:05:24.291344 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:05:24.302258 1970602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:05:24.477322 1970602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:05:33.376770 1970602 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:05:33.376853 1970602 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:05:33.376989 1970602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:05:33.377149 1970602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:05:33.377293 1970602 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:05:33.377400 1970602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:05:33.378924 1970602 out.go:235]   - Generating certificates and keys ...
	I0120 14:05:33.379025 1970602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:05:33.379104 1970602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:05:33.379208 1970602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:05:33.379307 1970602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:05:33.379417 1970602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:05:33.379524 1970602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:05:33.379607 1970602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:05:33.379717 1970602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:05:33.379839 1970602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:05:33.379966 1970602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:05:33.380043 1970602 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:05:33.380129 1970602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:05:33.380198 1970602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:05:33.380268 1970602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:05:33.380343 1970602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:05:33.380413 1970602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:05:33.380471 1970602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:05:33.380560 1970602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:05:33.380637 1970602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:05:33.382317 1970602 out.go:235]   - Booting up control plane ...
	I0120 14:05:33.382425 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:05:33.382512 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:05:33.382596 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:05:33.382747 1970602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:05:33.382857 1970602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:05:33.382912 1970602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:05:33.383102 1970602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:05:33.383280 1970602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:05:33.383370 1970602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.354939ms
	I0120 14:05:33.383469 1970602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:05:33.383558 1970602 kubeadm.go:310] [api-check] The API server is healthy after 5.504896351s
	I0120 14:05:33.383728 1970602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:05:33.383925 1970602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:05:33.384013 1970602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:05:33.384335 1970602 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-647109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:05:33.384423 1970602 kubeadm.go:310] [bootstrap-token] Using token: lua4mv.z68od0ysi19pmefo
	I0120 14:05:33.386221 1970602 out.go:235]   - Configuring RBAC rules ...
	I0120 14:05:33.386365 1970602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:05:33.386446 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:05:33.386593 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:05:33.386761 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:05:33.386926 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:05:33.387058 1970602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:05:33.387208 1970602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:05:33.387276 1970602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:05:33.387343 1970602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:05:33.387355 1970602 kubeadm.go:310] 
	I0120 14:05:33.387441 1970602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:05:33.387450 1970602 kubeadm.go:310] 
	I0120 14:05:33.387576 1970602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:05:33.387589 1970602 kubeadm.go:310] 
	I0120 14:05:33.387627 1970602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:05:33.387678 1970602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:05:33.387738 1970602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:05:33.387748 1970602 kubeadm.go:310] 
	I0120 14:05:33.387843 1970602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:05:33.387853 1970602 kubeadm.go:310] 
	I0120 14:05:33.387930 1970602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:05:33.387939 1970602 kubeadm.go:310] 
	I0120 14:05:33.388012 1970602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:05:33.388091 1970602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:05:33.388156 1970602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:05:33.388160 1970602 kubeadm.go:310] 
	I0120 14:05:33.388249 1970602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:05:33.388325 1970602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:05:33.388332 1970602 kubeadm.go:310] 
	I0120 14:05:33.388404 1970602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lua4mv.z68od0ysi19pmefo \
	I0120 14:05:33.388491 1970602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:05:33.388524 1970602 kubeadm.go:310] 	--control-plane 
	I0120 14:05:33.388531 1970602 kubeadm.go:310] 
	I0120 14:05:33.388617 1970602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:05:33.388625 1970602 kubeadm.go:310] 
	I0120 14:05:33.388736 1970602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lua4mv.z68od0ysi19pmefo \
	I0120 14:05:33.388834 1970602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:05:33.388846 1970602 cni.go:84] Creating CNI manager for ""
	I0120 14:05:33.388853 1970602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:05:33.390876 1970602 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:05:33.392513 1970602 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:05:33.407354 1970602 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:05:33.428824 1970602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:05:33.428934 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:33.428977 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-647109 minikube.k8s.io/updated_at=2025_01_20T14_05_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=embed-certs-647109 minikube.k8s.io/primary=true
	I0120 14:05:33.473138 1970602 ops.go:34] apiserver oom_adj: -16
	I0120 14:05:33.718712 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:34.218762 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:34.719381 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:35.219746 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:35.718888 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:36.218775 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:36.718813 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:37.219353 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:37.393979 1970602 kubeadm.go:1113] duration metric: took 3.965125255s to wait for elevateKubeSystemPrivileges
	I0120 14:05:37.394019 1970602 kubeadm.go:394] duration metric: took 5m3.132880668s to StartCluster
	I0120 14:05:37.394048 1970602 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:37.394150 1970602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:05:37.396378 1970602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:37.396706 1970602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:05:37.396823 1970602 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:05:37.396933 1970602 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-647109"
	I0120 14:05:37.396953 1970602 config.go:182] Loaded profile config "embed-certs-647109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:05:37.396970 1970602 addons.go:69] Setting metrics-server=true in profile "embed-certs-647109"
	I0120 14:05:37.396980 1970602 addons.go:238] Setting addon metrics-server=true in "embed-certs-647109"
	W0120 14:05:37.396988 1970602 addons.go:247] addon metrics-server should already be in state true
	I0120 14:05:37.396987 1970602 addons.go:69] Setting default-storageclass=true in profile "embed-certs-647109"
	I0120 14:05:37.396953 1970602 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-647109"
	I0120 14:05:37.397011 1970602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-647109"
	W0120 14:05:37.397012 1970602 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:05:37.397041 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397044 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397479 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397483 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397495 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397519 1970602 addons.go:69] Setting dashboard=true in profile "embed-certs-647109"
	I0120 14:05:37.397526 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397532 1970602 addons.go:238] Setting addon dashboard=true in "embed-certs-647109"
	W0120 14:05:37.397539 1970602 addons.go:247] addon dashboard should already be in state true
	I0120 14:05:37.397563 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397606 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397785 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397855 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397900 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.401795 1970602 out.go:177] * Verifying Kubernetes components...
	I0120 14:05:37.403396 1970602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:05:37.419631 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0120 14:05:37.419631 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0120 14:05:37.419751 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0120 14:05:37.420159 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.420340 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.420726 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.420753 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.420870 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.420883 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.421153 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.421286 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.421765 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.421807 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.421859 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.421907 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.423180 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.424356 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43445
	I0120 14:05:37.424853 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.427176 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.427218 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.431273 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.431306 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.431273 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.431590 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.431772 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.432414 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.432463 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.438218 1970602 addons.go:238] Setting addon default-storageclass=true in "embed-certs-647109"
	W0120 14:05:37.438363 1970602 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:05:37.438408 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.438859 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.439701 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.444146 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0120 14:05:37.444576 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
	I0120 14:05:37.444773 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.444915 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.445334 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.445367 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.445548 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.445565 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.445846 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.445940 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.446010 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.446155 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.448263 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.448850 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.451121 1970602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:05:37.451145 1970602 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:05:37.452901 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:05:37.452925 1970602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:05:37.452946 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.453029 1970602 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:37.453046 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:05:37.453066 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.457009 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457306 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.457323 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457535 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.457644 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457758 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.457905 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.458015 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.458314 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.458329 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.458460 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.458637 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.458741 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.458835 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.465409 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0120 14:05:37.466031 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.466695 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.466719 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.466964 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0120 14:05:37.467498 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.467603 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.468062 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.468085 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.468561 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.468603 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.469079 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.469289 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.471308 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.473344 1970602 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:05:37.475133 1970602 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:05:37.476628 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:05:37.476660 1970602 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:05:37.476691 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.480284 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.480952 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.480993 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.481641 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.481944 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.482177 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.482403 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.509821 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0120 14:05:37.510356 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.511017 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.511041 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.511533 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.511923 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.514239 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.514505 1970602 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:37.514525 1970602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:05:37.514547 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.518318 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.518891 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.518919 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.519100 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.519331 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.519489 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.519722 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.741139 1970602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:05:37.799051 1970602 node_ready.go:35] waiting up to 6m0s for node "embed-certs-647109" to be "Ready" ...
	I0120 14:05:37.809096 1970602 node_ready.go:49] node "embed-certs-647109" has status "Ready":"True"
	I0120 14:05:37.809130 1970602 node_ready.go:38] duration metric: took 10.033158ms for node "embed-certs-647109" to be "Ready" ...
	I0120 14:05:37.809146 1970602 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:37.819590 1970602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:37.940986 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:37.994181 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:05:37.994215 1970602 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:05:38.057795 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:05:38.057828 1970602 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:05:38.074299 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:05:38.074328 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:05:38.076399 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:38.161099 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:05:38.161133 1970602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:05:38.172032 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:05:38.172066 1970602 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:05:38.251253 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:05:38.251287 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:05:38.267793 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:38.267823 1970602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:05:38.300776 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:05:38.300806 1970602 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:05:38.438115 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:38.438263 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:05:38.438293 1970602 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:05:38.469992 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:05:38.470026 1970602 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:05:38.488178 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.488209 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.488602 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.488624 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.488633 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.488642 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.488642 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:38.488915 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.488928 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.506460 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.506490 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.506908 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.506932 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.506952 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:38.535768 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:05:38.535801 1970602 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:05:38.588204 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:38.588244 1970602 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:05:38.641430 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:39.322794 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24634872s)
	I0120 14:05:39.322872 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:39.322888 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:39.323266 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:39.323312 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:39.323332 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:39.323342 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:39.323351 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:39.323616 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:39.323623 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:39.323633 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:39.850519 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:40.002690 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.564518983s)
	I0120 14:05:40.002772 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.002791 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.003274 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.003336 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.003360 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.003372 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.003382 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.003762 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.003779 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.003791 1970602 addons.go:479] Verifying addon metrics-server=true in "embed-certs-647109"
	I0120 14:05:40.003823 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.923510 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.282025528s)
	I0120 14:05:40.923577 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.923608 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.923936 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.923983 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.924000 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.924023 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.924034 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.924348 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.924369 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.924375 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.926492 1970602 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-647109 addons enable metrics-server
	
	I0120 14:05:40.928141 1970602 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:05:40.930035 1970602 addons.go:514] duration metric: took 3.533222189s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:05:42.330147 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:43.342012 1970602 pod_ready.go:93] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.342038 1970602 pod_ready.go:82] duration metric: took 5.522419293s for pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.342050 1970602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.359479 1970602 pod_ready.go:93] pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.359506 1970602 pod_ready.go:82] duration metric: took 17.448444ms for pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.359518 1970602 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.403702 1970602 pod_ready.go:93] pod "etcd-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.403732 1970602 pod_ready.go:82] duration metric: took 44.20711ms for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.403744 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.413596 1970602 pod_ready.go:93] pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.413623 1970602 pod_ready.go:82] duration metric: took 9.873022ms for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.413634 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.421693 1970602 pod_ready.go:93] pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.421718 1970602 pod_ready.go:82] duration metric: took 8.077458ms for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.421731 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-chhpt" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.724510 1970602 pod_ready.go:93] pod "kube-proxy-chhpt" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.724537 1970602 pod_ready.go:82] duration metric: took 302.799519ms for pod "kube-proxy-chhpt" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.724549 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:45.324683 1970602 pod_ready.go:93] pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:45.324712 1970602 pod_ready.go:82] duration metric: took 1.600155124s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:45.324723 1970602 pod_ready.go:39] duration metric: took 7.515564286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:45.324743 1970602 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:45.324813 1970602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:45.381331 1970602 api_server.go:72] duration metric: took 7.98457351s to wait for apiserver process to appear ...
	I0120 14:05:45.381368 1970602 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:45.381388 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:05:45.386523 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I0120 14:05:45.387477 1970602 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:45.387504 1970602 api_server.go:131] duration metric: took 6.127764ms to wait for apiserver health ...
	I0120 14:05:45.387513 1970602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:45.530457 1970602 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:45.530502 1970602 system_pods.go:61] "coredns-668d6bf9bc-ndbzp" [d43c588e-6fc1-435b-9c9a-8b19201596ae] Running
	I0120 14:05:45.530510 1970602 system_pods.go:61] "coredns-668d6bf9bc-ndv97" [3298cf5d-5983-463b-8aca-792fa1d94241] Running
	I0120 14:05:45.530516 1970602 system_pods.go:61] "etcd-embed-certs-647109" [58f40005-bda9-4a38-8e2a-8e3f4a869c20] Running
	I0120 14:05:45.530521 1970602 system_pods.go:61] "kube-apiserver-embed-certs-647109" [8e188c16-1d56-4972-baf1-20d8dd10f440] Running
	I0120 14:05:45.530527 1970602 system_pods.go:61] "kube-controller-manager-embed-certs-647109" [691924af-9adb-4788-9104-0dcca6ee95f3] Running
	I0120 14:05:45.530532 1970602 system_pods.go:61] "kube-proxy-chhpt" [a0244020-668f-4700-85c2-9562f4d0c920] Running
	I0120 14:05:45.530537 1970602 system_pods.go:61] "kube-scheduler-embed-certs-647109" [6b42ab84-e4cb-4dc8-a4ad-e7da476ec3b2] Running
	I0120 14:05:45.530548 1970602 system_pods.go:61] "metrics-server-f79f97bbb-nqwxp" [68d39045-4c01-40a2-9e8f-0f7734838f0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:45.530559 1970602 system_pods.go:61] "storage-provisioner" [8067c033-4ef4-4945-95b5-f4120df75f5c] Running
	I0120 14:05:45.530574 1970602 system_pods.go:74] duration metric: took 143.054434ms to wait for pod list to return data ...
	I0120 14:05:45.530587 1970602 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:45.727314 1970602 default_sa.go:45] found service account: "default"
	I0120 14:05:45.727359 1970602 default_sa.go:55] duration metric: took 196.759471ms for default service account to be created ...
	I0120 14:05:45.727373 1970602 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:45.927406 1970602 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-647109 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-647109 -n embed-certs-647109
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-647109 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-647109 logs -n 25: (1.659846114s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-191446        | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-727256  | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 13:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 14:01 UTC |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-647109                 | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC | 20 Jan 25 14:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-191446             | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-727256       | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:25 UTC | 20 Jan 25 14:25 UTC |
	| start   | -p newest-cni-345509 --memory=2200 --alsologtostderr   | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:25 UTC | 20 Jan 25 14:26 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 14:25 UTC | 20 Jan 25 14:25 UTC |
	| start   | -p auto-798303 --memory=3072                           | auto-798303                  | jenkins | v1.35.0 | 20 Jan 25 14:25 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-345509             | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-345509                                   | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-345509                  | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-345509 --memory=2200 --alsologtostderr   | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:26 UTC | 20 Jan 25 14:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-345509 image list                           | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-345509                                   | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-345509                                   | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-345509                                   | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	| delete  | -p newest-cni-345509                                   | newest-cni-345509            | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	| start   | -p kindnet-798303                                      | kindnet-798303               | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 14:27:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 14:27:05.704502 1979051 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:27:05.704782 1979051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:27:05.704793 1979051 out.go:358] Setting ErrFile to fd 2...
	I0120 14:27:05.704800 1979051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:27:05.705023 1979051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 14:27:05.705641 1979051 out.go:352] Setting JSON to false
	I0120 14:27:05.706741 1979051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22172,"bootTime":1737361054,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:27:05.706862 1979051 start.go:139] virtualization: kvm guest
	I0120 14:27:05.709061 1979051 out.go:177] * [kindnet-798303] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:27:05.711041 1979051 notify.go:220] Checking for updates...
	I0120 14:27:05.711122 1979051 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:27:05.712568 1979051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:27:05.713925 1979051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:27:05.715193 1979051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:27:05.716480 1979051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:27:05.717821 1979051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:27:05.720162 1979051 config.go:182] Loaded profile config "auto-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:27:05.720277 1979051 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:27:05.720385 1979051 config.go:182] Loaded profile config "embed-certs-647109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:27:05.720491 1979051 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:27:05.762453 1979051 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 14:27:05.763804 1979051 start.go:297] selected driver: kvm2
	I0120 14:27:05.763819 1979051 start.go:901] validating driver "kvm2" against <nil>
	I0120 14:27:05.763832 1979051 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:27:05.764607 1979051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:27:05.764718 1979051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:27:05.781608 1979051 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:27:05.781684 1979051 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 14:27:05.782035 1979051 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:27:05.782078 1979051 cni.go:84] Creating CNI manager for "kindnet"
	I0120 14:27:05.782096 1979051 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 14:27:05.782161 1979051 start.go:340] cluster config:
	{Name:kindnet-798303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kindnet-798303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0120 14:27:05.782297 1979051 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:27:05.784317 1979051 out.go:177] * Starting "kindnet-798303" primary control-plane node in "kindnet-798303" cluster
	I0120 14:27:05.785539 1979051 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:27:05.785583 1979051 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 14:27:05.785593 1979051 cache.go:56] Caching tarball of preloaded images
	I0120 14:27:05.785699 1979051 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 14:27:05.785709 1979051 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 14:27:05.785795 1979051 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kindnet-798303/config.json ...
	I0120 14:27:05.785813 1979051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/kindnet-798303/config.json: {Name:mk2bff02cd925ff7460c94774775d637b8e33838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:27:05.785980 1979051 start.go:360] acquireMachinesLock for kindnet-798303: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:27:05.786042 1979051 start.go:364] duration metric: took 33.15µs to acquireMachinesLock for "kindnet-798303"
	I0120 14:27:05.786063 1979051 start.go:93] Provisioning new machine with config: &{Name:kindnet-798303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kindnet-798303 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:27:05.786115 1979051 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.834892904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383226834858666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b116fdc-1e6e-463e-91e3-41c7e983cea1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.835558616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab41c780-20e0-401d-9d0b-068383619de7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.835639480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab41c780-20e0-401d-9d0b-068383619de7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.838153734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c45060ad19dfe2151ec21756809529a203e73453d03393bdc9342e7a896c01a4,PodSandboxId:86a6ec3999385481184ae043699e0fbf812589c05ed17eb798cc59d03b85b3ee,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737383205768373245,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-zrncv,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 94b29827-2a96-43a9-a464-257731edcfe1,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65a0579898a6d576e612d9540379fd5c11fe655b92cd818aa71df73f3f1a7,PodSandboxId:37170b61cab4baf3fc5644688f8ae211934682eb84733d68e1fc6db1d88c9518,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737381952603249064,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-h8fnb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: ca97d4da-f961-46d0-9080-de751403c1b1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5368f1767b6bb95e3719510f63cae192500ebdaafc55b5932b35a942128238a9,PodSandboxId:8764e1823a79c0110dc753a4a07d1843bdeaec16b2e9d98dc4b3ed04faa4ecbe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737381940192277294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8067c033-4ef4-4945-95b5-f4120df75f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c3d0b9728dbe73f77edb1b227606518787e4a801bf83f64ba199e6f4cdb0fe,PodSandboxId:2491c0a9534caf2e849d3fd7b4e83e10ce86407994ec0569b9928027016f83c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381939146956061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ndbzp,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: d43c588e-6fc1-435b-9c9a-8b19201596ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ab5b69b431a5bcb9807e856776ee8a7be54260d0a98f63c336cb81fb7ea877,PodSandboxId:13d412a60a870b7c54b1eb5f0125bd263b5896c2d3d3d28b69f56ce8a408193c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381939021025593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ndv97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3298cf5d-5983-463b-8aca-792fa1d94241,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9cdb5bb5615bae73116081c389c9dce6d640844305b0e8567b75eff415a0ec,PodSandboxId:68301978ec20fbf62f5210a9fde5ef56964da382d590fa4b8e80a27d3531112d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737381937586953781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chhpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0244020-668f-4700-85c2-9562f4d0c920,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95013808e72c8e011c62ea3104b843df8676adadb840cd22ac43dfd742c2412e,PodSandboxId:e33e74b07ec3bb24b2596d88ad0ffd54650a739ad34c4116374c60fde232dbf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737381927045692353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08036f7035643015725523a144d41de1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6e72a04c05b5ed7cb1e69170b44a46fcf1163db0cf7d3cea17edf5e4bd7a1e,PodSandboxId:ae46a01c1bbf79ad914225e1b40e0148a6fe14688db89efc44ac92f5af58e9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bd
e617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737381926987260820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c654c5ea03296879aac09a7f463b79,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde8bad4bc9c48b277f5f9e7b88ff1308ab7ff7ae0390eece7da45a7526dc659,PodSandboxId:f9223feb8dee69d88801516d514beb4d7b019d4aaab1325b3e18df2ef3f42efc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c203475
8e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737381927016721799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3663a131e15718f79ef21bced51d45aa,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a85edd3b065078a84f92cadfd1ffe15050772ecdbc0214c8c1d0f883a448f6,PodSandboxId:e1436d25cc374165696ca4924327e16ff81c8389e8313f22b08634b4b8c28fb9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737381926970948231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65ae762ebc5bf4814531c462e6f9427,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9d39a07dfeb303f2bc7287240b43a8f39bc77875e039ce18d2122bd370884c,PodSandboxId:3f10e3b055fead0bdb5f086ddd59d8ef65fba25f78cf1efaed2b94b73850ae27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381636597614687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3663a131e15718f79ef21bced51d45aa,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab41c780-20e0-401d-9d0b-068383619de7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.885301036Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5da9b6c8-bdb0-45a2-9a81-593eae418657 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.885454421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5da9b6c8-bdb0-45a2-9a81-593eae418657 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.886842286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70779778-28c0-4fa0-93ae-af68d1140cf6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.887295285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383226887271629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70779778-28c0-4fa0-93ae-af68d1140cf6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.888352819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2eb69c89-9252-4f75-ab5d-841d74143e8e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.888548222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2eb69c89-9252-4f75-ab5d-841d74143e8e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.888818404Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c45060ad19dfe2151ec21756809529a203e73453d03393bdc9342e7a896c01a4,PodSandboxId:86a6ec3999385481184ae043699e0fbf812589c05ed17eb798cc59d03b85b3ee,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737383205768373245,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-zrncv,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 94b29827-2a96-43a9-a464-257731edcfe1,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65a0579898a6d576e612d9540379fd5c11fe655b92cd818aa71df73f3f1a7,PodSandboxId:37170b61cab4baf3fc5644688f8ae211934682eb84733d68e1fc6db1d88c9518,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737381952603249064,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-h8fnb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: ca97d4da-f961-46d0-9080-de751403c1b1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5368f1767b6bb95e3719510f63cae192500ebdaafc55b5932b35a942128238a9,PodSandboxId:8764e1823a79c0110dc753a4a07d1843bdeaec16b2e9d98dc4b3ed04faa4ecbe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737381940192277294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8067c033-4ef4-4945-95b5-f4120df75f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c3d0b9728dbe73f77edb1b227606518787e4a801bf83f64ba199e6f4cdb0fe,PodSandboxId:2491c0a9534caf2e849d3fd7b4e83e10ce86407994ec0569b9928027016f83c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381939146956061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ndbzp,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: d43c588e-6fc1-435b-9c9a-8b19201596ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ab5b69b431a5bcb9807e856776ee8a7be54260d0a98f63c336cb81fb7ea877,PodSandboxId:13d412a60a870b7c54b1eb5f0125bd263b5896c2d3d3d28b69f56ce8a408193c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381939021025593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ndv97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3298cf5d-5983-463b-8aca-792fa1d94241,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9cdb5bb5615bae73116081c389c9dce6d640844305b0e8567b75eff415a0ec,PodSandboxId:68301978ec20fbf62f5210a9fde5ef56964da382d590fa4b8e80a27d3531112d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737381937586953781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chhpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0244020-668f-4700-85c2-9562f4d0c920,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95013808e72c8e011c62ea3104b843df8676adadb840cd22ac43dfd742c2412e,PodSandboxId:e33e74b07ec3bb24b2596d88ad0ffd54650a739ad34c4116374c60fde232dbf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737381927045692353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08036f7035643015725523a144d41de1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6e72a04c05b5ed7cb1e69170b44a46fcf1163db0cf7d3cea17edf5e4bd7a1e,PodSandboxId:ae46a01c1bbf79ad914225e1b40e0148a6fe14688db89efc44ac92f5af58e9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bd
e617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737381926987260820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c654c5ea03296879aac09a7f463b79,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde8bad4bc9c48b277f5f9e7b88ff1308ab7ff7ae0390eece7da45a7526dc659,PodSandboxId:f9223feb8dee69d88801516d514beb4d7b019d4aaab1325b3e18df2ef3f42efc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c203475
8e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737381927016721799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3663a131e15718f79ef21bced51d45aa,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a85edd3b065078a84f92cadfd1ffe15050772ecdbc0214c8c1d0f883a448f6,PodSandboxId:e1436d25cc374165696ca4924327e16ff81c8389e8313f22b08634b4b8c28fb9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737381926970948231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65ae762ebc5bf4814531c462e6f9427,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9d39a07dfeb303f2bc7287240b43a8f39bc77875e039ce18d2122bd370884c,PodSandboxId:3f10e3b055fead0bdb5f086ddd59d8ef65fba25f78cf1efaed2b94b73850ae27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381636597614687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3663a131e15718f79ef21bced51d45aa,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2eb69c89-9252-4f75-ab5d-841d74143e8e name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.925180847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f80c370e-a46b-4d32-afd0-78e58a6cf93e name=/runtime.v1.RuntimeService/Version
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.925271362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f80c370e-a46b-4d32-afd0-78e58a6cf93e name=/runtime.v1.RuntimeService/Version
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.927642542Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51481221-bfde-4abb-a601-19ab835b9d03 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.928622039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383226928597391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51481221-bfde-4abb-a601-19ab835b9d03 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.929211810Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b121043-ad33-4543-88e1-264813d3e8df name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.929282496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b121043-ad33-4543-88e1-264813d3e8df name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.929612632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c45060ad19dfe2151ec21756809529a203e73453d03393bdc9342e7a896c01a4,PodSandboxId:86a6ec3999385481184ae043699e0fbf812589c05ed17eb798cc59d03b85b3ee,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737383205768373245,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-zrncv,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 94b29827-2a96-43a9-a464-257731edcfe1,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65a0579898a6d576e612d9540379fd5c11fe655b92cd818aa71df73f3f1a7,PodSandboxId:37170b61cab4baf3fc5644688f8ae211934682eb84733d68e1fc6db1d88c9518,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737381952603249064,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-h8fnb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: ca97d4da-f961-46d0-9080-de751403c1b1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5368f1767b6bb95e3719510f63cae192500ebdaafc55b5932b35a942128238a9,PodSandboxId:8764e1823a79c0110dc753a4a07d1843bdeaec16b2e9d98dc4b3ed04faa4ecbe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737381940192277294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8067c033-4ef4-4945-95b5-f4120df75f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c3d0b9728dbe73f77edb1b227606518787e4a801bf83f64ba199e6f4cdb0fe,PodSandboxId:2491c0a9534caf2e849d3fd7b4e83e10ce86407994ec0569b9928027016f83c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381939146956061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ndbzp,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: d43c588e-6fc1-435b-9c9a-8b19201596ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ab5b69b431a5bcb9807e856776ee8a7be54260d0a98f63c336cb81fb7ea877,PodSandboxId:13d412a60a870b7c54b1eb5f0125bd263b5896c2d3d3d28b69f56ce8a408193c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381939021025593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ndv97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3298cf5d-5983-463b-8aca-792fa1d94241,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9cdb5bb5615bae73116081c389c9dce6d640844305b0e8567b75eff415a0ec,PodSandboxId:68301978ec20fbf62f5210a9fde5ef56964da382d590fa4b8e80a27d3531112d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737381937586953781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chhpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0244020-668f-4700-85c2-9562f4d0c920,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95013808e72c8e011c62ea3104b843df8676adadb840cd22ac43dfd742c2412e,PodSandboxId:e33e74b07ec3bb24b2596d88ad0ffd54650a739ad34c4116374c60fde232dbf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737381927045692353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08036f7035643015725523a144d41de1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6e72a04c05b5ed7cb1e69170b44a46fcf1163db0cf7d3cea17edf5e4bd7a1e,PodSandboxId:ae46a01c1bbf79ad914225e1b40e0148a6fe14688db89efc44ac92f5af58e9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bd
e617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737381926987260820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c654c5ea03296879aac09a7f463b79,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde8bad4bc9c48b277f5f9e7b88ff1308ab7ff7ae0390eece7da45a7526dc659,PodSandboxId:f9223feb8dee69d88801516d514beb4d7b019d4aaab1325b3e18df2ef3f42efc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c203475
8e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737381927016721799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3663a131e15718f79ef21bced51d45aa,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a85edd3b065078a84f92cadfd1ffe15050772ecdbc0214c8c1d0f883a448f6,PodSandboxId:e1436d25cc374165696ca4924327e16ff81c8389e8313f22b08634b4b8c28fb9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737381926970948231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65ae762ebc5bf4814531c462e6f9427,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9d39a07dfeb303f2bc7287240b43a8f39bc77875e039ce18d2122bd370884c,PodSandboxId:3f10e3b055fead0bdb5f086ddd59d8ef65fba25f78cf1efaed2b94b73850ae27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381636597614687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3663a131e15718f79ef21bced51d45aa,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b121043-ad33-4543-88e1-264813d3e8df name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.973799642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6bfae34-fffd-448b-804a-3dc4dfc38c34 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.973906181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6bfae34-fffd-448b-804a-3dc4dfc38c34 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.975142715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aefc8425-450e-44b6-a402-1921affbc8a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.975726708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383226975703957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aefc8425-450e-44b6-a402-1921affbc8a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.976493841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e0ebbe9-b955-44c5-9a2e-85ecf449f76c name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.976559575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e0ebbe9-b955-44c5-9a2e-85ecf449f76c name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:27:06 embed-certs-647109 crio[730]: time="2025-01-20 14:27:06.976814135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c45060ad19dfe2151ec21756809529a203e73453d03393bdc9342e7a896c01a4,PodSandboxId:86a6ec3999385481184ae043699e0fbf812589c05ed17eb798cc59d03b85b3ee,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:9,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737383205768373245,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-zrncv,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 94b29827-2a96-43a9-a464-257731edcfe1,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a65a0579898a6d576e612d9540379fd5c11fe655b92cd818aa71df73f3f1a7,PodSandboxId:37170b61cab4baf3fc5644688f8ae211934682eb84733d68e1fc6db1d88c9518,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737381952603249064,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-h8fnb,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubern
etes.pod.uid: ca97d4da-f961-46d0-9080-de751403c1b1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5368f1767b6bb95e3719510f63cae192500ebdaafc55b5932b35a942128238a9,PodSandboxId:8764e1823a79c0110dc753a4a07d1843bdeaec16b2e9d98dc4b3ed04faa4ecbe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737381940192277294,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: sto
rage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8067c033-4ef4-4945-95b5-f4120df75f5c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8c3d0b9728dbe73f77edb1b227606518787e4a801bf83f64ba199e6f4cdb0fe,PodSandboxId:2491c0a9534caf2e849d3fd7b4e83e10ce86407994ec0569b9928027016f83c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381939146956061,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ndbzp,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: d43c588e-6fc1-435b-9c9a-8b19201596ae,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3ab5b69b431a5bcb9807e856776ee8a7be54260d0a98f63c336cb81fb7ea877,PodSandboxId:13d412a60a870b7c54b1eb5f0125bd263b5896c2d3d3d28b69f56ce8a408193c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567
591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737381939021025593,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-ndv97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3298cf5d-5983-463b-8aca-792fa1d94241,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac9cdb5bb5615bae73116081c389c9dce6d640844305b0e8567b75eff415a0ec,PodSandboxId:68301978ec20fbf62f5210a9fde5ef56964da382d590fa4b8e80a27d3531112d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Im
age:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737381937586953781,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-chhpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0244020-668f-4700-85c2-9562f4d0c920,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95013808e72c8e011c62ea3104b843df8676adadb840cd22ac43dfd742c2412e,PodSandboxId:e33e74b07ec3bb24b2596d88ad0ffd54650a739ad34c4116374c60fde232dbf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8
510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737381927045692353,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08036f7035643015725523a144d41de1,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6e72a04c05b5ed7cb1e69170b44a46fcf1163db0cf7d3cea17edf5e4bd7a1e,PodSandboxId:ae46a01c1bbf79ad914225e1b40e0148a6fe14688db89efc44ac92f5af58e9d5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bd
e617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737381926987260820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88c654c5ea03296879aac09a7f463b79,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde8bad4bc9c48b277f5f9e7b88ff1308ab7ff7ae0390eece7da45a7526dc659,PodSandboxId:f9223feb8dee69d88801516d514beb4d7b019d4aaab1325b3e18df2ef3f42efc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c203475
8e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737381927016721799,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3663a131e15718f79ef21bced51d45aa,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3a85edd3b065078a84f92cadfd1ffe15050772ecdbc0214c8c1d0f883a448f6,PodSandboxId:e1436d25cc374165696ca4924327e16ff81c8389e8313f22b08634b4b8c28fb9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737381926970948231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f65ae762ebc5bf4814531c462e6f9427,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9d39a07dfeb303f2bc7287240b43a8f39bc77875e039ce18d2122bd370884c,PodSandboxId:3f10e3b055fead0bdb5f086ddd59d8ef65fba25f78cf1efaed2b94b73850ae27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381636597614687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-647109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3663a131e15718f79ef21bced51d45aa,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e0ebbe9-b955-44c5-9a2e-85ecf449f76c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c45060ad19dfe       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   9                   86a6ec3999385       dashboard-metrics-scraper-86c6bf9756-zrncv
	e3a65a0579898       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   21 minutes ago      Running             kubernetes-dashboard        0                   37170b61cab4b       kubernetes-dashboard-7779f9b69b-h8fnb
	5368f1767b6bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   8764e1823a79c       storage-provisioner
	a8c3d0b9728db       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   2491c0a9534ca       coredns-668d6bf9bc-ndbzp
	d3ab5b69b431a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   13d412a60a870       coredns-668d6bf9bc-ndv97
	ac9cdb5bb5615       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                           21 minutes ago      Running             kube-proxy                  0                   68301978ec20f       kube-proxy-chhpt
	95013808e72c8       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   e33e74b07ec3b       etcd-embed-certs-647109
	fde8bad4bc9c4       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           21 minutes ago      Running             kube-apiserver              2                   f9223feb8dee6       kube-apiserver-embed-certs-647109
	bc6e72a04c05b       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                           21 minutes ago      Running             kube-controller-manager     2                   ae46a01c1bbf7       kube-controller-manager-embed-certs-647109
	c3a85edd3b065       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                           21 minutes ago      Running             kube-scheduler              2                   e1436d25cc374       kube-scheduler-embed-certs-647109
	9f9d39a07dfeb       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           26 minutes ago      Exited              kube-apiserver              1                   3f10e3b055fea       kube-apiserver-embed-certs-647109
	
	
	==> coredns [a8c3d0b9728dbe73f77edb1b227606518787e4a801bf83f64ba199e6f4cdb0fe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d3ab5b69b431a5bcb9807e856776ee8a7be54260d0a98f63c336cb81fb7ea877] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-647109
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-647109
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3
	                    minikube.k8s.io/name=embed-certs-647109
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T14_05_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 14:05:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-647109
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 14:26:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 14:24:16 +0000   Mon, 20 Jan 2025 14:05:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 14:24:16 +0000   Mon, 20 Jan 2025 14:05:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 14:24:16 +0000   Mon, 20 Jan 2025 14:05:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 14:24:16 +0000   Mon, 20 Jan 2025 14:05:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.62
	  Hostname:    embed-certs-647109
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 87146b12b250491c99c19c28ca23b73f
	  System UUID:                87146b12-b250-491c-99c1-9c28ca23b73f
	  Boot ID:                    4972766b-6fac-4611-abf2-b14b3b458867
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-ndbzp                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-ndv97                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-647109                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-647109             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-647109    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-chhpt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-647109             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-nqwxp                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-zrncv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-h8fnb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-647109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-647109 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-647109 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-647109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-647109 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-647109 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-647109 event: Registered Node embed-certs-647109 in Controller
	  Normal  CIDRAssignmentFailed     21m                cidrAllocator    Node embed-certs-647109 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.041851] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.069741] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.001115] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.549269] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.078426] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.067482] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056389] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.192961] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.147396] systemd-fstab-generator[691]: Ignoring "noauto" option for root device
	[  +0.293198] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[  +4.435711] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.069832] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.723211] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +5.573708] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.074981] kauditd_printk_skb: 83 callbacks suppressed
	[Jan20 14:05] kauditd_printk_skb: 3 callbacks suppressed
	[ +13.576101] systemd-fstab-generator[2683]: Ignoring "noauto" option for root device
	[  +4.580295] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.997457] systemd-fstab-generator[3027]: Ignoring "noauto" option for root device
	[  +4.991887] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.173474] systemd-fstab-generator[3191]: Ignoring "noauto" option for root device
	[  +4.859463] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.803610] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [95013808e72c8e011c62ea3104b843df8676adadb840cd22ac43dfd742c2412e] <==
	{"level":"info","ts":"2025-01-20T14:05:27.976997Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-20T14:05:28.011785Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-20T14:05:28.012799Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.62:2379"}
	{"level":"info","ts":"2025-01-20T14:05:50.892232Z","caller":"traceutil/trace.go:171","msg":"trace[635522004] linearizableReadLoop","detail":"{readStateIndex:569; appliedIndex:568; }","duration":"195.110764ms","start":"2025-01-20T14:05:50.695697Z","end":"2025-01-20T14:05:50.890808Z","steps":["trace[635522004] 'read index received'  (duration: 194.937857ms)","trace[635522004] 'applied index is now lower than readState.Index'  (duration: 172.334µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T14:05:50.891736Z","caller":"traceutil/trace.go:171","msg":"trace[153504318] transaction","detail":"{read_only:false; response_revision:554; number_of_response:1; }","duration":"213.600863ms","start":"2025-01-20T14:05:50.677513Z","end":"2025-01-20T14:05:50.891114Z","steps":["trace[153504318] 'process raft request'  (duration: 213.162787ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T14:05:50.895020Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.927639ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:05:50.895290Z","caller":"traceutil/trace.go:171","msg":"trace[434941004] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:554; }","duration":"199.600362ms","start":"2025-01-20T14:05:50.695666Z","end":"2025-01-20T14:05:50.895266Z","steps":["trace[434941004] 'agreement among raft nodes before linearized reading'  (duration: 197.914218ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T14:05:50.895028Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.674287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2025-01-20T14:05:50.895333Z","caller":"traceutil/trace.go:171","msg":"trace[2072241395] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:554; }","duration":"104.703665ms","start":"2025-01-20T14:05:50.790612Z","end":"2025-01-20T14:05:50.895316Z","steps":["trace[2072241395] 'agreement among raft nodes before linearized reading'  (duration: 103.609475ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:15:28.039749Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":883}
	{"level":"info","ts":"2025-01-20T14:15:28.082848Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":883,"took":"42.637101ms","hash":2129590008,"current-db-size-bytes":3072000,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":3072000,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2025-01-20T14:15:28.082962Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2129590008,"revision":883,"compact-revision":-1}
	{"level":"info","ts":"2025-01-20T14:20:28.050876Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1134}
	{"level":"info","ts":"2025-01-20T14:20:28.057000Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1134,"took":"5.611739ms","hash":189476508,"current-db-size-bytes":3072000,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1798144,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T14:20:28.057048Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":189476508,"revision":1134,"compact-revision":883}
	{"level":"info","ts":"2025-01-20T14:25:28.062145Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1387}
	{"level":"info","ts":"2025-01-20T14:25:28.066860Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1387,"took":"4.244862ms","hash":1235482132,"current-db-size-bytes":3072000,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1839104,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T14:25:28.066929Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1235482132,"revision":1387,"compact-revision":1134}
	{"level":"info","ts":"2025-01-20T14:25:58.632318Z","caller":"traceutil/trace.go:171","msg":"trace[1705768999] transaction","detail":"{read_only:false; response_revision:1664; number_of_response:1; }","duration":"131.734386ms","start":"2025-01-20T14:25:58.500528Z","end":"2025-01-20T14:25:58.632262Z","steps":["trace[1705768999] 'process raft request'  (duration: 131.231752ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T14:25:59.018821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.141353ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:25:59.018925Z","caller":"traceutil/trace.go:171","msg":"trace[1978730207] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1664; }","duration":"204.376653ms","start":"2025-01-20T14:25:58.814528Z","end":"2025-01-20T14:25:59.018905Z","steps":["trace[1978730207] 'range keys from in-memory index tree'  (duration: 204.080854ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:26:26.262011Z","caller":"traceutil/trace.go:171","msg":"trace[455906708] transaction","detail":"{read_only:false; response_revision:1686; number_of_response:1; }","duration":"212.987383ms","start":"2025-01-20T14:26:26.048991Z","end":"2025-01-20T14:26:26.261979Z","steps":["trace[455906708] 'process raft request'  (duration: 212.492094ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-20T14:26:50.703916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.962072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:26:50.704018Z","caller":"traceutil/trace.go:171","msg":"trace[663774777] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1707; }","duration":"169.121113ms","start":"2025-01-20T14:26:50.534877Z","end":"2025-01-20T14:26:50.703999Z","steps":["trace[663774777] 'range keys from in-memory index tree'  (duration: 168.873441ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:26:52.017030Z","caller":"traceutil/trace.go:171","msg":"trace[132576034] transaction","detail":"{read_only:false; response_revision:1708; number_of_response:1; }","duration":"203.120346ms","start":"2025-01-20T14:26:51.813896Z","end":"2025-01-20T14:26:52.017016Z","steps":["trace[132576034] 'process raft request'  (duration: 202.859503ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:27:07 up 26 min,  0 users,  load average: 0.31, 0.31, 0.26
	Linux embed-certs-647109 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9f9d39a07dfeb303f2bc7287240b43a8f39bc77875e039ce18d2122bd370884c] <==
	W0120 14:05:22.462860       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:22.542695       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:22.636987       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:22.663063       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:22.861147       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:22.922148       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:22.923486       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:22.952207       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:22.961832       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.018333       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.036492       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.090492       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.120005       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.136793       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.138205       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.185222       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.217980       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.269849       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.349489       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.403522       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.415165       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.418821       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.433808       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.444939       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:05:23.476851       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [fde8bad4bc9c48b277f5f9e7b88ff1308ab7ff7ae0390eece7da45a7526dc659] <==
	I0120 14:23:30.795130       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:23:30.795178       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 14:25:29.794733       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:25:29.795112       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 14:25:30.797222       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:25:30.797388       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 14:25:30.797251       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:25:30.797533       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0120 14:25:30.798650       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:25:30.798681       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 14:26:30.799203       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:26:30.799593       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 14:26:30.799204       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:26:30.799752       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0120 14:26:30.800841       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:26:30.800900       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bc6e72a04c05b5ed7cb1e69170b44a46fcf1163db0cf7d3cea17edf5e4bd7a1e] <==
	I0120 14:22:06.614378       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:22:36.548457       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:22:36.625052       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:23:06.557071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:23:06.636636       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:23:36.563360       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:23:36.647172       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:24:06.570935       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:24:06.659230       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 14:24:16.866109       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-647109"
	E0120 14:24:36.577509       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:24:36.668053       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:25:06.584782       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:25:06.677188       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:25:36.592918       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:25:36.687314       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:26:06.601240       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:26:06.695477       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:26:36.609980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:26:36.706019       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 14:26:46.313791       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="465.033µs"
	I0120 14:26:54.533731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="898.586µs"
	I0120 14:26:58.765097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="100.77µs"
	E0120 14:27:06.619608       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:27:06.713926       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ac9cdb5bb5615bae73116081c389c9dce6d640844305b0e8567b75eff415a0ec] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 14:05:38.055587       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 14:05:38.076583       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.62"]
	E0120 14:05:38.076808       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 14:05:38.159728       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 14:05:38.159778       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 14:05:38.159842       1 server_linux.go:170] "Using iptables Proxier"
	I0120 14:05:38.165815       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 14:05:38.166225       1 server.go:497] "Version info" version="v1.32.0"
	I0120 14:05:38.166304       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 14:05:38.168000       1 config.go:199] "Starting service config controller"
	I0120 14:05:38.168030       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 14:05:38.168059       1 config.go:105] "Starting endpoint slice config controller"
	I0120 14:05:38.168064       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 14:05:38.168703       1 config.go:329] "Starting node config controller"
	I0120 14:05:38.168712       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 14:05:38.269087       1 shared_informer.go:320] Caches are synced for node config
	I0120 14:05:38.269143       1 shared_informer.go:320] Caches are synced for service config
	I0120 14:05:38.269155       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c3a85edd3b065078a84f92cadfd1ffe15050772ecdbc0214c8c1d0f883a448f6] <==
	E0120 14:05:29.790490       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0120 14:05:29.790506       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:29.790092       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 14:05:29.790656       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:30.610991       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0120 14:05:30.611131       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:30.634214       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0120 14:05:30.634272       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:30.709685       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 14:05:30.709748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:30.712502       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 14:05:30.712563       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:30.802758       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 14:05:30.802817       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:30.839329       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 14:05:30.839387       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:30.901309       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0120 14:05:30.901381       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:31.025162       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 14:05:31.025233       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 14:05:31.090370       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 14:05:31.090474       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:05:31.115250       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 14:05:31.115323       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0120 14:05:33.481109       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 14:26:32 embed-certs-647109 kubelet[3034]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 14:26:32 embed-certs-647109 kubelet[3034]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 14:26:32 embed-certs-647109 kubelet[3034]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 14:26:32 embed-certs-647109 kubelet[3034]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 14:26:33 embed-certs-647109 kubelet[3034]: E0120 14:26:33.234307    3034 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383193233866263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:26:33 embed-certs-647109 kubelet[3034]: E0120 14:26:33.234346    3034 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383193233866263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:26:43 embed-certs-647109 kubelet[3034]: E0120 14:26:43.236352    3034 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383203235644258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:26:43 embed-certs-647109 kubelet[3034]: E0120 14:26:43.236460    3034 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383203235644258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:26:44 embed-certs-647109 kubelet[3034]: E0120 14:26:44.791980    3034 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 20 14:26:44 embed-certs-647109 kubelet[3034]: E0120 14:26:44.793091    3034 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 20 14:26:44 embed-certs-647109 kubelet[3034]: E0120 14:26:44.793476    3034 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-nqwxp_kube-system(68d39045-4c01-40a2-9e8f-0f7734838f0b): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 20 14:26:44 embed-certs-647109 kubelet[3034]: E0120 14:26:44.794828    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-nqwxp" podUID="68d39045-4c01-40a2-9e8f-0f7734838f0b"
	Jan 20 14:26:45 embed-certs-647109 kubelet[3034]: I0120 14:26:45.743628    3034 scope.go:117] "RemoveContainer" containerID="a7691d7fa176429ecd1c594d1ed2c7bd1ce62141a3d33ff482ba961af876ac00"
	Jan 20 14:26:46 embed-certs-647109 kubelet[3034]: I0120 14:26:46.282222    3034 scope.go:117] "RemoveContainer" containerID="a7691d7fa176429ecd1c594d1ed2c7bd1ce62141a3d33ff482ba961af876ac00"
	Jan 20 14:26:46 embed-certs-647109 kubelet[3034]: I0120 14:26:46.282835    3034 scope.go:117] "RemoveContainer" containerID="c45060ad19dfe2151ec21756809529a203e73453d03393bdc9342e7a896c01a4"
	Jan 20 14:26:46 embed-certs-647109 kubelet[3034]: E0120 14:26:46.283090    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zrncv_kubernetes-dashboard(94b29827-2a96-43a9-a464-257731edcfe1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zrncv" podUID="94b29827-2a96-43a9-a464-257731edcfe1"
	Jan 20 14:26:53 embed-certs-647109 kubelet[3034]: E0120 14:26:53.239565    3034 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383213238712671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:26:53 embed-certs-647109 kubelet[3034]: E0120 14:26:53.239878    3034 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383213238712671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:26:54 embed-certs-647109 kubelet[3034]: I0120 14:26:54.515319    3034 scope.go:117] "RemoveContainer" containerID="c45060ad19dfe2151ec21756809529a203e73453d03393bdc9342e7a896c01a4"
	Jan 20 14:26:54 embed-certs-647109 kubelet[3034]: E0120 14:26:54.515567    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zrncv_kubernetes-dashboard(94b29827-2a96-43a9-a464-257731edcfe1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zrncv" podUID="94b29827-2a96-43a9-a464-257731edcfe1"
	Jan 20 14:26:58 embed-certs-647109 kubelet[3034]: E0120 14:26:58.749607    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-nqwxp" podUID="68d39045-4c01-40a2-9e8f-0f7734838f0b"
	Jan 20 14:27:03 embed-certs-647109 kubelet[3034]: E0120 14:27:03.241553    3034 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383223241200526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:27:03 embed-certs-647109 kubelet[3034]: E0120 14:27:03.241602    3034 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383223241200526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:27:07 embed-certs-647109 kubelet[3034]: I0120 14:27:07.743092    3034 scope.go:117] "RemoveContainer" containerID="c45060ad19dfe2151ec21756809529a203e73453d03393bdc9342e7a896c01a4"
	Jan 20 14:27:07 embed-certs-647109 kubelet[3034]: E0120 14:27:07.743355    3034 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-zrncv_kubernetes-dashboard(94b29827-2a96-43a9-a464-257731edcfe1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-zrncv" podUID="94b29827-2a96-43a9-a464-257731edcfe1"
	
	
	==> kubernetes-dashboard [e3a65a0579898a6d576e612d9540379fd5c11fe655b92cd818aa71df73f3f1a7] <==
	2025/01/20 14:14:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:15:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:15:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:16:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:16:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:17:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:17:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:18:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:18:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:19:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:19:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:20:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:20:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:21:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:21:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:22:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:22:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:23:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:23:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:24:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:24:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:25:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:25:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:26:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:26:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5368f1767b6bb95e3719510f63cae192500ebdaafc55b5932b35a942128238a9] <==
	I0120 14:05:40.442991       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 14:05:40.581526       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 14:05:40.581750       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 14:05:40.625249       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2df2b18d-9315-4deb-9a84-f951cee54bfa", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-647109_5796a78f-a6ae-43d5-9e44-d1d4889daf03 became leader
	I0120 14:05:40.633503       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 14:05:40.633775       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-647109_5796a78f-a6ae-43d5-9e44-d1d4889daf03!
	I0120 14:05:40.737106       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-647109_5796a78f-a6ae-43d5-9e44-d1d4889daf03!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-647109 -n embed-certs-647109
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-647109 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-nqwxp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-647109 describe pod metrics-server-f79f97bbb-nqwxp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-647109 describe pod metrics-server-f79f97bbb-nqwxp: exit status 1 (73.186299ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-nqwxp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-647109 describe pod metrics-server-f79f97bbb-nqwxp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (1624.74s)
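
The embed-certs logs above show two recurring symptoms: kube-controller-manager repeatedly reporting "stale GroupVersion discovery: metrics.k8s.io/v1beta1", and the kubelet failing to pull fake.domain/registry.k8s.io/echoserver:1.4 for metrics-server (the pull failure is expected in this suite, since the metrics-server registry is overridden to fake.domain; the same CustomAddonRegistries override is visible in the old-k8s-version profile config further below). As an illustrative sketch only, not part of the recorded test run, one might inspect both symptoms against a live profile with commands like the following, reusing the context name from the logs; the deployment name and label assumptions follow the standard metrics-server addon layout:

	kubectl --context embed-certs-647109 get apiservice v1beta1.metrics.k8s.io -o yaml   # why metrics.k8s.io/v1beta1 discovery is stale (Available condition, backing service)
	kubectl --context embed-certs-647109 -n kube-system describe deploy metrics-server   # shows the fake.domain image override and pull status
	kubectl --context embed-certs-647109 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20   # recent ErrImagePull / ImagePullBackOff events
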

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (511.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-191446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-191446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m28.288699119s)

                                                
                                                
-- stdout --
	* [old-k8s-version-191446] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-191446" primary control-plane node in "old-k8s-version-191446" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-191446" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 14:01:27.427355 1971155 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:01:27.427672 1971155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:27.427684 1971155 out.go:358] Setting ErrFile to fd 2...
	I0120 14:01:27.427689 1971155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:27.427861 1971155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 14:01:27.428469 1971155 out.go:352] Setting JSON to false
	I0120 14:01:27.429515 1971155 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20633,"bootTime":1737361054,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:01:27.429638 1971155 start.go:139] virtualization: kvm guest
	I0120 14:01:27.432957 1971155 out.go:177] * [old-k8s-version-191446] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:01:27.434416 1971155 notify.go:220] Checking for updates...
	I0120 14:01:27.434436 1971155 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:01:27.435888 1971155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:01:27.437110 1971155 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:01:27.438375 1971155 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:01:27.439683 1971155 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:01:27.440973 1971155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:01:27.442815 1971155 config.go:182] Loaded profile config "old-k8s-version-191446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 14:01:27.443461 1971155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:27.443523 1971155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:27.459336 1971155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0120 14:01:27.459958 1971155 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:27.460551 1971155 main.go:141] libmachine: Using API Version  1
	I0120 14:01:27.460579 1971155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:27.460983 1971155 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:27.461193 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:27.462890 1971155 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I0120 14:01:27.464089 1971155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:01:27.464460 1971155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:27.464515 1971155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:27.480078 1971155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40901
	I0120 14:01:27.480738 1971155 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:27.481318 1971155 main.go:141] libmachine: Using API Version  1
	I0120 14:01:27.481346 1971155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:27.481690 1971155 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:27.481876 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:27.521052 1971155 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 14:01:27.522260 1971155 start.go:297] selected driver: kvm2
	I0120 14:01:27.522277 1971155 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-1
91446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:27.522391 1971155 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:01:27.523126 1971155 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:27.523220 1971155 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:01:27.540400 1971155 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:01:27.540811 1971155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:01:27.540848 1971155 cni.go:84] Creating CNI manager for ""
	I0120 14:01:27.540908 1971155 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:01:27.540968 1971155 start.go:340] cluster config:
	{Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:27.541092 1971155 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:27.542939 1971155 out.go:177] * Starting "old-k8s-version-191446" primary control-plane node in "old-k8s-version-191446" cluster
	I0120 14:01:27.544315 1971155 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 14:01:27.544367 1971155 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 14:01:27.544377 1971155 cache.go:56] Caching tarball of preloaded images
	I0120 14:01:27.544510 1971155 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 14:01:27.544521 1971155 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0120 14:01:27.544613 1971155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/config.json ...
	I0120 14:01:27.544803 1971155 start.go:360] acquireMachinesLock for old-k8s-version-191446: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:01:27.544848 1971155 start.go:364] duration metric: took 25.31µs to acquireMachinesLock for "old-k8s-version-191446"
	I0120 14:01:27.544864 1971155 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:01:27.544871 1971155 fix.go:54] fixHost starting: 
	I0120 14:01:27.545128 1971155 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:27.545160 1971155 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:27.561033 1971155 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
	I0120 14:01:27.561456 1971155 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:27.562040 1971155 main.go:141] libmachine: Using API Version  1
	I0120 14:01:27.562071 1971155 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:27.562479 1971155 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:27.562798 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:27.562973 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetState
	I0120 14:01:27.564797 1971155 fix.go:112] recreateIfNeeded on old-k8s-version-191446: state=Stopped err=<nil>
	I0120 14:01:27.564833 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	W0120 14:01:27.564996 1971155 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:01:27.566761 1971155 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-191446" ...
	I0120 14:01:27.567956 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .Start
	I0120 14:01:27.568241 1971155 main.go:141] libmachine: (old-k8s-version-191446) starting domain...
	I0120 14:01:27.568273 1971155 main.go:141] libmachine: (old-k8s-version-191446) ensuring networks are active...
	I0120 14:01:27.569283 1971155 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network default is active
	I0120 14:01:27.569742 1971155 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network mk-old-k8s-version-191446 is active
	I0120 14:01:27.570107 1971155 main.go:141] libmachine: (old-k8s-version-191446) getting domain XML...
	I0120 14:01:27.570794 1971155 main.go:141] libmachine: (old-k8s-version-191446) creating domain...
	I0120 14:01:28.844259 1971155 main.go:141] libmachine: (old-k8s-version-191446) waiting for IP...
	I0120 14:01:28.845169 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:28.845736 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:28.845869 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:28.845749 1971190 retry.go:31] will retry after 249.093991ms: waiting for domain to come up
	I0120 14:01:29.096266 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.096835 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.096870 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.096778 1971190 retry.go:31] will retry after 285.937419ms: waiting for domain to come up
	I0120 14:01:29.384654 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.385227 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.385260 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.385184 1971190 retry.go:31] will retry after 403.444594ms: waiting for domain to come up
	I0120 14:01:29.789819 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.790466 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.790516 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.790442 1971190 retry.go:31] will retry after 525.904837ms: waiting for domain to come up
	I0120 14:01:30.361342 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:30.361758 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:30.361799 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:30.361742 1971190 retry.go:31] will retry after 498.844656ms: waiting for domain to come up
	I0120 14:01:30.862672 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:30.863328 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:30.863359 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:30.863284 1971190 retry.go:31] will retry after 695.176765ms: waiting for domain to come up
	I0120 14:01:31.559994 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:31.560418 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:31.560483 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:31.560423 1971190 retry.go:31] will retry after 1.138767233s: waiting for domain to come up
	I0120 14:01:32.700822 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:32.701293 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:32.701323 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:32.701238 1971190 retry.go:31] will retry after 1.039348308s: waiting for domain to come up
	I0120 14:01:33.742152 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:33.742798 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:33.742827 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:33.742756 1971190 retry.go:31] will retry after 1.487881975s: waiting for domain to come up
	I0120 14:01:35.232385 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:35.232903 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:35.233000 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:35.232883 1971190 retry.go:31] will retry after 1.541170209s: waiting for domain to come up
	I0120 14:01:36.775877 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:36.776558 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:36.776586 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:36.776513 1971190 retry.go:31] will retry after 2.896053576s: waiting for domain to come up
	I0120 14:01:39.675363 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:39.675986 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:39.676021 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:39.675945 1971190 retry.go:31] will retry after 3.105341623s: waiting for domain to come up
	I0120 14:01:42.783450 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:42.783953 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:42.783979 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:42.783919 1971190 retry.go:31] will retry after 3.216558184s: waiting for domain to come up
	I0120 14:01:46.001813 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.002358 1971155 main.go:141] libmachine: (old-k8s-version-191446) found domain IP: 192.168.61.215
	I0120 14:01:46.002386 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has current primary IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.002392 1971155 main.go:141] libmachine: (old-k8s-version-191446) reserving static IP address...
	I0120 14:01:46.002890 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "old-k8s-version-191446", mac: "52:54:00:87:83:fb", ip: "192.168.61.215"} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.002913 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | skip adding static IP to network mk-old-k8s-version-191446 - found existing host DHCP lease matching {name: "old-k8s-version-191446", mac: "52:54:00:87:83:fb", ip: "192.168.61.215"}
	I0120 14:01:46.002961 1971155 main.go:141] libmachine: (old-k8s-version-191446) reserved static IP address 192.168.61.215 for domain old-k8s-version-191446
	I0120 14:01:46.003012 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Getting to WaitForSSH function...
	I0120 14:01:46.003029 1971155 main.go:141] libmachine: (old-k8s-version-191446) waiting for SSH...
	I0120 14:01:46.005479 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.005822 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.005844 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.005930 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH client type: external
	I0120 14:01:46.005974 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa (-rw-------)
	I0120 14:01:46.006012 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:01:46.006030 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | About to run SSH command:
	I0120 14:01:46.006042 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | exit 0
	I0120 14:01:46.134861 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | SSH cmd err, output: <nil>: 
	I0120 14:01:46.135287 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetConfigRaw
	I0120 14:01:46.135993 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:46.138498 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.138913 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.138949 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.139408 1971155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/config.json ...
	I0120 14:01:46.139628 1971155 machine.go:93] provisionDockerMachine start ...
	I0120 14:01:46.139648 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:46.139910 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.142776 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.143168 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.143196 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.143377 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.143551 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.143710 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.143884 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.144084 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.144287 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.144299 1971155 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:01:46.259874 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:01:46.259909 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.260184 1971155 buildroot.go:166] provisioning hostname "old-k8s-version-191446"
	I0120 14:01:46.260218 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.260442 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.263109 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.263469 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.263498 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.263608 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.263809 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.263964 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.264115 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.264263 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.264566 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.264598 1971155 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-191446 && echo "old-k8s-version-191446" | sudo tee /etc/hostname
	I0120 14:01:46.390733 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-191446
	
	I0120 14:01:46.390778 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.394086 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.394452 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.394495 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.394665 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.394902 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.395120 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.395312 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.395484 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.395721 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.395742 1971155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-191446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-191446/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-191446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:01:46.517398 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:01:46.517429 1971155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:01:46.517474 1971155 buildroot.go:174] setting up certificates
	I0120 14:01:46.517489 1971155 provision.go:84] configureAuth start
	I0120 14:01:46.517501 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.517852 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:46.520852 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.521243 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.521276 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.521419 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.523721 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.524173 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.524216 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.524323 1971155 provision.go:143] copyHostCerts
	I0120 14:01:46.524385 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:01:46.524406 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:01:46.524505 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:01:46.524641 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:01:46.524653 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:01:46.524681 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:01:46.524749 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:01:46.524756 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:01:46.524777 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:01:46.524823 1971155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-191446 san=[127.0.0.1 192.168.61.215 localhost minikube old-k8s-version-191446]
	I0120 14:01:46.780575 1971155 provision.go:177] copyRemoteCerts
	I0120 14:01:46.780653 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:01:46.780692 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.783791 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.784174 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.784204 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.784390 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.784667 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.784947 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.785129 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:46.873537 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:01:46.906323 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 14:01:46.934595 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 14:01:46.963136 1971155 provision.go:87] duration metric: took 445.630599ms to configureAuth
	I0120 14:01:46.963175 1971155 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:01:46.963391 1971155 config.go:182] Loaded profile config "old-k8s-version-191446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 14:01:46.963495 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.966539 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.966917 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.966953 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.967102 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.967316 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.967488 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.967694 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.967860 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.968110 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.968140 1971155 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:01:47.221729 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:01:47.221758 1971155 machine.go:96] duration metric: took 1.082115997s to provisionDockerMachine
	I0120 14:01:47.221770 1971155 start.go:293] postStartSetup for "old-k8s-version-191446" (driver="kvm2")
	I0120 14:01:47.221780 1971155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:01:47.221801 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.222156 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:01:47.222213 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.225564 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.226024 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.226063 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.226226 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.226479 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.226678 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.226841 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.315044 1971155 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:01:47.319600 1971155 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:01:47.319630 1971155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:01:47.319699 1971155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:01:47.319785 1971155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:01:47.319880 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:01:47.331251 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:01:47.359102 1971155 start.go:296] duration metric: took 137.311216ms for postStartSetup
	I0120 14:01:47.359156 1971155 fix.go:56] duration metric: took 19.814283548s for fixHost
	I0120 14:01:47.359184 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.362176 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.362643 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.362680 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.362916 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.363161 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.363352 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.363520 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.363693 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:47.363932 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:47.363948 1971155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:01:47.480011 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381707.434903722
	
	I0120 14:01:47.480050 1971155 fix.go:216] guest clock: 1737381707.434903722
	I0120 14:01:47.480061 1971155 fix.go:229] Guest: 2025-01-20 14:01:47.434903722 +0000 UTC Remote: 2025-01-20 14:01:47.359160605 +0000 UTC m=+19.980745135 (delta=75.743117ms)
	I0120 14:01:47.480090 1971155 fix.go:200] guest clock delta is within tolerance: 75.743117ms
	I0120 14:01:47.480098 1971155 start.go:83] releasing machines lock for "old-k8s-version-191446", held for 19.935238773s
	I0120 14:01:47.480132 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.480450 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:47.483367 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.483792 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.483828 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.483945 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484435 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484606 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484699 1971155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:01:47.484761 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.484899 1971155 ssh_runner.go:195] Run: cat /version.json
	I0120 14:01:47.484929 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.487568 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.487980 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.488011 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488093 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488211 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.488434 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.488591 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.488630 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.488653 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488741 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.488862 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.489009 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.489153 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.489343 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.608326 1971155 ssh_runner.go:195] Run: systemctl --version
	I0120 14:01:47.614709 1971155 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:01:47.772139 1971155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:01:47.780427 1971155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:01:47.780502 1971155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:01:47.798266 1971155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:01:47.798304 1971155 start.go:495] detecting cgroup driver to use...
	I0120 14:01:47.798398 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:01:47.815867 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:01:47.835855 1971155 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:01:47.835918 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:01:47.853481 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:01:47.869379 1971155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:01:47.988401 1971155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:01:48.193315 1971155 docker.go:233] disabling docker service ...
	I0120 14:01:48.193390 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:01:48.214201 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:01:48.230964 1971155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:01:48.377733 1971155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:01:48.516198 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:01:48.533486 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:01:48.557115 1971155 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 14:01:48.557197 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.570080 1971155 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:01:48.570162 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.584225 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.596995 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.609663 1971155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:01:48.623942 1971155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:01:48.637099 1971155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:01:48.637171 1971155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:01:48.653873 1971155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:01:48.666171 1971155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:01:48.807308 1971155 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:01:48.914634 1971155 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:01:48.914731 1971155 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:01:48.920471 1971155 start.go:563] Will wait 60s for crictl version
	I0120 14:01:48.920558 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:48.924644 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:01:48.966008 1971155 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:01:48.966111 1971155 ssh_runner.go:195] Run: crio --version
	I0120 14:01:48.995639 1971155 ssh_runner.go:195] Run: crio --version
	I0120 14:01:49.031088 1971155 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 14:01:49.032269 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:49.035945 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:49.036382 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:49.036423 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:49.036733 1971155 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0120 14:01:49.041470 1971155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:01:49.055442 1971155 kubeadm.go:883] updating cluster {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:01:49.055654 1971155 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 14:01:49.055738 1971155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:01:49.111537 1971155 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 14:01:49.111603 1971155 ssh_runner.go:195] Run: which lz4
	I0120 14:01:49.116646 1971155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:01:49.121632 1971155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:01:49.121670 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 14:01:51.019564 1971155 crio.go:462] duration metric: took 1.902969728s to copy over tarball
	I0120 14:01:51.019668 1971155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 14:01:54.192207 1971155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.172482213s)
	I0120 14:01:54.192247 1971155 crio.go:469] duration metric: took 3.172642787s to extract the tarball
	I0120 14:01:54.192257 1971155 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 14:01:54.241548 1971155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:01:54.283118 1971155 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 14:01:54.283147 1971155 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 14:01:54.283222 1971155 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.283246 1971155 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.283292 1971155 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.283311 1971155 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.283369 1971155 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 14:01:54.283369 1971155 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.283370 1971155 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.283429 1971155 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.285174 1971155 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.285194 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.285222 1971155 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.285232 1971155 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.285484 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.285533 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.285551 1971155 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 14:01:54.285520 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.443508 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.451962 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.459320 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.478139 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.482365 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.490130 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.491742 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 14:01:54.535842 1971155 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 14:01:54.535930 1971155 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.536008 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.556510 1971155 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 14:01:54.556563 1971155 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.556617 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.604701 1971155 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 14:01:54.604747 1971155 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.604801 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648817 1971155 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 14:01:54.648847 1971155 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 14:01:54.648872 1971155 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.648887 1971155 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.648932 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648951 1971155 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 14:01:54.648986 1971155 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.649059 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648932 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.662210 1971155 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 14:01:54.662265 1971155 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 14:01:54.662271 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.662303 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.662304 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.662392 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.662403 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.666373 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.666427 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.784739 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.815550 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.815585 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:54.815637 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.815650 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.820367 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.820421 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.820459 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:55.000111 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:55.000218 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:55.013244 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:55.013276 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:55.013348 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:55.013372 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:55.015126 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:55.144073 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 14:01:55.144169 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 14:01:55.175966 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:55.175984 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 14:01:55.179810 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 14:01:55.179835 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 14:01:55.180076 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 14:01:55.216565 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 14:01:55.216646 1971155 cache_images.go:92] duration metric: took 933.479899ms to LoadCachedImages
	W0120 14:01:55.216768 1971155 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0120 14:01:55.216789 1971155 kubeadm.go:934] updating node { 192.168.61.215 8443 v1.20.0 crio true true} ...
	I0120 14:01:55.216907 1971155 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-191446 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:01:55.216973 1971155 ssh_runner.go:195] Run: crio config
	I0120 14:01:55.272348 1971155 cni.go:84] Creating CNI manager for ""
	I0120 14:01:55.272377 1971155 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:01:55.272387 1971155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:01:55.272407 1971155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.215 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-191446 NodeName:old-k8s-version-191446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 14:01:55.272581 1971155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-191446"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:01:55.272661 1971155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 14:01:55.285452 1971155 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:01:55.285532 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:01:55.300604 1971155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 14:01:55.321434 1971155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:01:55.339855 1971155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 14:01:55.360605 1971155 ssh_runner.go:195] Run: grep 192.168.61.215	control-plane.minikube.internal$ /etc/hosts
	I0120 14:01:55.364977 1971155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:01:55.380053 1971155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:01:55.499744 1971155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:01:55.518232 1971155 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446 for IP: 192.168.61.215
	I0120 14:01:55.518267 1971155 certs.go:194] generating shared ca certs ...
	I0120 14:01:55.518300 1971155 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:01:55.518512 1971155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:01:55.518553 1971155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:01:55.518563 1971155 certs.go:256] generating profile certs ...
	I0120 14:01:55.571153 1971155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.key
	I0120 14:01:55.571288 1971155 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key.d5f4b946
	I0120 14:01:55.571350 1971155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key
	I0120 14:01:55.571517 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:01:55.571559 1971155 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:01:55.571570 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:01:55.571606 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:01:55.571641 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:01:55.571671 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:01:55.571733 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:01:55.572624 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:01:55.613349 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:01:55.645837 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:01:55.688637 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:01:55.736949 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 14:01:55.786459 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 14:01:55.833912 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:01:55.861615 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:01:55.891303 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:01:55.920272 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:01:55.947553 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:01:55.979159 1971155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:01:56.002476 1971155 ssh_runner.go:195] Run: openssl version
	I0120 14:01:56.011075 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:01:56.026823 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.033320 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.033404 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.041787 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:01:56.055968 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:01:56.072918 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.078642 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.078744 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.085416 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:01:56.101948 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:01:56.117742 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.123020 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.123086 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.129661 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 14:01:56.142113 1971155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:01:56.147841 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:01:56.154627 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:01:56.161139 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:01:56.167754 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:01:56.174520 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:01:56.181204 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 14:01:56.187656 1971155 kubeadm.go:392] StartCluster: {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:56.187767 1971155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:01:56.187860 1971155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:01:56.233626 1971155 cri.go:89] found id: ""
	I0120 14:01:56.233718 1971155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:01:56.245027 1971155 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:01:56.245062 1971155 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:01:56.245126 1971155 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:01:56.258403 1971155 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:01:56.259211 1971155 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-191446" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:01:56.259525 1971155 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-191446" cluster setting kubeconfig missing "old-k8s-version-191446" context setting]
	I0120 14:01:56.260060 1971155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:01:56.288258 1971155 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:01:56.302812 1971155 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.215
	I0120 14:01:56.302855 1971155 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:01:56.302872 1971155 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 14:01:56.302942 1971155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:01:56.343694 1971155 cri.go:89] found id: ""
	I0120 14:01:56.343794 1971155 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:01:56.364228 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:01:56.375163 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:01:56.375187 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:01:56.375260 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:01:56.386527 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:01:56.386622 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:01:56.398715 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:01:56.410031 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:01:56.410112 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:01:56.420983 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:01:56.433109 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:01:56.433192 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:01:56.447385 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:01:56.460977 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:01:56.461066 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:01:56.472124 1971155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:01:56.484344 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:56.617563 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.344622 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.621080 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.732306 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.856823 1971155 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:01:57.856931 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:58.357005 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:58.857625 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:59.358085 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:59.857398 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:00.357930 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:00.857134 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.357106 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.857163 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:02.357462 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:02.857734 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:03.357569 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:03.857955 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:04.357274 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:04.857819 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.357138 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.857025 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:06.357050 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:06.857399 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:07.357029 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:07.857904 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:08.357419 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:08.857241 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:09.357914 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:09.857010 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.357118 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.857037 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:11.357243 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:11.857017 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:12.357401 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:12.857737 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:13.358043 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:13.857191 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:14.357168 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:14.857760 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.357900 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.857889 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:16.357039 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:16.857812 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:17.358144 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:17.857538 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:18.357133 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:18.857266 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.357682 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.857168 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.357018 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.857784 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.357312 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.857374 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:22.357052 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:22.857953 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:23.357118 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:23.857846 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.357974 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.858083 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:25.357532 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:25.857724 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:26.357640 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:26.857695 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:27.357848 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:27.857637 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:28.357980 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:28.857073 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:29.357768 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:29.857689 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.358021 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.857725 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:31.357087 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:31.857093 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:32.358124 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:32.857233 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:33.357972 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:33.857268 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:34.357580 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:34.857317 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.357391 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.858044 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:36.357666 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:36.857501 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:37.357800 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:37.857302 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:38.357923 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:38.857475 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.357375 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.857802 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:40.357852 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:40.857000 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:41.357100 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:41.857256 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:42.357310 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:42.857156 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:43.357487 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:43.857399 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.357134 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.857807 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:45.358043 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:45.857787 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:46.357476 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:46.857480 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:47.357059 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:47.857389 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:48.357917 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:48.857908 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.357865 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.857103 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:50.357844 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:50.856981 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:51.357722 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:51.857389 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:52.357276 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:52.857418 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:53.357813 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:53.857620 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:54.357209 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:54.857914 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.357510 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.857571 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:56.357067 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:56.857492 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:57.357062 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:57.857477 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:02:57.857614 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:02:57.905881 1971155 cri.go:89] found id: ""
	I0120 14:02:57.905912 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.905922 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:02:57.905929 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:02:57.905992 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:02:57.943622 1971155 cri.go:89] found id: ""
	I0120 14:02:57.943651 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.943661 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:02:57.943667 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:02:57.943723 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:02:57.988526 1971155 cri.go:89] found id: ""
	I0120 14:02:57.988562 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.988574 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:02:57.988583 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:02:57.988651 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:02:58.031485 1971155 cri.go:89] found id: ""
	I0120 14:02:58.031521 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.031534 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:02:58.031543 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:02:58.031610 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:02:58.068567 1971155 cri.go:89] found id: ""
	I0120 14:02:58.068598 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.068607 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:02:58.068613 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:02:58.068671 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:02:58.111132 1971155 cri.go:89] found id: ""
	I0120 14:02:58.111163 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.111172 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:02:58.111179 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:02:58.111249 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:02:58.148303 1971155 cri.go:89] found id: ""
	I0120 14:02:58.148347 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.148360 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:02:58.148369 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:02:58.148451 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:02:58.185950 1971155 cri.go:89] found id: ""
	I0120 14:02:58.185999 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.186012 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:02:58.186045 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:02:58.186067 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:02:58.240918 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:02:58.240967 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:02:58.257093 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:02:58.257146 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:02:58.414616 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:02:58.414647 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:02:58.414668 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:02:58.492488 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:02:58.492552 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:01.040468 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:01.055229 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:01.055334 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:01.096466 1971155 cri.go:89] found id: ""
	I0120 14:03:01.096504 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.096517 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:01.096527 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:01.096598 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:01.134935 1971155 cri.go:89] found id: ""
	I0120 14:03:01.134970 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.134981 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:01.134991 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:01.135067 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:01.173227 1971155 cri.go:89] found id: ""
	I0120 14:03:01.173260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.173270 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:01.173276 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:01.173330 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:01.214239 1971155 cri.go:89] found id: ""
	I0120 14:03:01.214284 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.214295 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:01.214305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:01.214371 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:01.256599 1971155 cri.go:89] found id: ""
	I0120 14:03:01.256637 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.256650 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:01.256659 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:01.256739 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:01.296996 1971155 cri.go:89] found id: ""
	I0120 14:03:01.297032 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.297061 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:01.297070 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:01.297138 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:01.332783 1971155 cri.go:89] found id: ""
	I0120 14:03:01.332823 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.332834 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:01.332843 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:01.332918 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:01.369365 1971155 cri.go:89] found id: ""
	I0120 14:03:01.369406 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.369421 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:01.369434 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:01.369451 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:01.414439 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:01.414480 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:01.471195 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:01.471246 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:01.486430 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:01.486462 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:01.574798 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:01.574828 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:01.574845 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:04.171235 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:04.188065 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:04.188156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:04.228357 1971155 cri.go:89] found id: ""
	I0120 14:03:04.228387 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.228400 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:04.228409 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:04.228467 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:04.267565 1971155 cri.go:89] found id: ""
	I0120 14:03:04.267610 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.267624 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:04.267635 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:04.267711 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:04.307392 1971155 cri.go:89] found id: ""
	I0120 14:03:04.307425 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.307434 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:04.307440 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:04.307508 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:04.349729 1971155 cri.go:89] found id: ""
	I0120 14:03:04.349767 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.349778 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:04.349786 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:04.349870 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:04.387475 1971155 cri.go:89] found id: ""
	I0120 14:03:04.387501 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.387509 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:04.387516 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:04.387572 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:04.427468 1971155 cri.go:89] found id: ""
	I0120 14:03:04.427509 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.427530 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:04.427539 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:04.427612 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:04.466639 1971155 cri.go:89] found id: ""
	I0120 14:03:04.466670 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.466679 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:04.466686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:04.466741 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:04.504757 1971155 cri.go:89] found id: ""
	I0120 14:03:04.504787 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.504795 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:04.504806 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:04.504818 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:04.557733 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:04.557779 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:04.573354 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:04.573387 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:04.650417 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:04.650446 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:04.650463 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:04.733072 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:04.733120 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:07.274982 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:07.290100 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:07.290193 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:07.332977 1971155 cri.go:89] found id: ""
	I0120 14:03:07.333017 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.333029 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:07.333038 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:07.333115 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:07.372892 1971155 cri.go:89] found id: ""
	I0120 14:03:07.372933 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.372945 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:07.372954 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:07.373026 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:07.425530 1971155 cri.go:89] found id: ""
	I0120 14:03:07.425577 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.425590 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:07.425600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:07.425662 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:07.476155 1971155 cri.go:89] found id: ""
	I0120 14:03:07.476184 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.476193 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:07.476199 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:07.476254 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:07.521877 1971155 cri.go:89] found id: ""
	I0120 14:03:07.521914 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.521926 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:07.521939 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:07.522011 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:07.560355 1971155 cri.go:89] found id: ""
	I0120 14:03:07.560395 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.560409 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:07.560418 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:07.560487 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:07.600264 1971155 cri.go:89] found id: ""
	I0120 14:03:07.600300 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.600312 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:07.600320 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:07.600394 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:07.638852 1971155 cri.go:89] found id: ""
	I0120 14:03:07.638882 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.638891 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:07.638904 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:07.638921 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:07.697341 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:07.697388 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:07.712419 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:07.712453 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:07.790196 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:07.790219 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:07.790236 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:07.865638 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:07.865691 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:10.411816 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:10.425923 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:10.425995 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:10.469227 1971155 cri.go:89] found id: ""
	I0120 14:03:10.469260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.469271 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:10.469279 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:10.469335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:10.507955 1971155 cri.go:89] found id: ""
	I0120 14:03:10.507982 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.507991 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:10.507997 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:10.508064 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:10.543101 1971155 cri.go:89] found id: ""
	I0120 14:03:10.543127 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.543135 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:10.543141 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:10.543211 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:10.585664 1971155 cri.go:89] found id: ""
	I0120 14:03:10.585707 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.585722 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:10.585731 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:10.585798 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:10.623476 1971155 cri.go:89] found id: ""
	I0120 14:03:10.623509 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.623519 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:10.623526 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:10.623696 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:10.660175 1971155 cri.go:89] found id: ""
	I0120 14:03:10.660212 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.660236 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:10.660243 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:10.660328 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:10.701559 1971155 cri.go:89] found id: ""
	I0120 14:03:10.701587 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.701595 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:10.701601 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:10.701660 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:10.745904 1971155 cri.go:89] found id: ""
	I0120 14:03:10.745934 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.745946 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:10.745960 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:10.745977 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:10.797159 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:10.797195 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:10.811080 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:10.811114 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:10.892397 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:10.892453 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:10.892474 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:10.974483 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:10.974548 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:13.520017 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:13.534970 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:13.535057 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:13.572408 1971155 cri.go:89] found id: ""
	I0120 14:03:13.572447 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.572460 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:13.572469 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:13.572551 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:13.611551 1971155 cri.go:89] found id: ""
	I0120 14:03:13.611584 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.611594 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:13.611602 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:13.611679 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:13.648597 1971155 cri.go:89] found id: ""
	I0120 14:03:13.648643 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.648659 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:13.648669 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:13.648746 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:13.688240 1971155 cri.go:89] found id: ""
	I0120 14:03:13.688273 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.688284 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:13.688292 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:13.688359 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:13.726824 1971155 cri.go:89] found id: ""
	I0120 14:03:13.726858 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.726870 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:13.726879 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:13.726960 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:13.763355 1971155 cri.go:89] found id: ""
	I0120 14:03:13.763393 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.763406 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:13.763426 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:13.763520 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:13.805672 1971155 cri.go:89] found id: ""
	I0120 14:03:13.805709 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.805721 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:13.805729 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:13.805808 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:13.843604 1971155 cri.go:89] found id: ""
	I0120 14:03:13.843639 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.843647 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:13.843658 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:13.843677 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:13.900719 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:13.900769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:13.917734 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:13.917769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:13.989979 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:13.990004 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:13.990023 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:14.065519 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:14.065568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:16.608887 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:16.624966 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:16.625095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:16.663250 1971155 cri.go:89] found id: ""
	I0120 14:03:16.663286 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.663299 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:16.663309 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:16.663381 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:16.705075 1971155 cri.go:89] found id: ""
	I0120 14:03:16.705109 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.705121 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:16.705129 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:16.705203 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:16.743136 1971155 cri.go:89] found id: ""
	I0120 14:03:16.743172 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.743183 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:16.743196 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:16.743259 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:16.781721 1971155 cri.go:89] found id: ""
	I0120 14:03:16.781749 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.781759 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:16.781768 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:16.781838 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:16.819156 1971155 cri.go:89] found id: ""
	I0120 14:03:16.819186 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.819195 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:16.819201 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:16.819267 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:16.857239 1971155 cri.go:89] found id: ""
	I0120 14:03:16.857271 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.857282 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:16.857291 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:16.857366 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:16.896447 1971155 cri.go:89] found id: ""
	I0120 14:03:16.896484 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.896494 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:16.896500 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:16.896573 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:16.933838 1971155 cri.go:89] found id: ""
	I0120 14:03:16.933868 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.933884 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:16.933895 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:16.933912 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:16.947603 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:16.947641 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:17.030769 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:17.030797 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:17.030817 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:17.113685 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:17.113733 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:17.156727 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:17.156762 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:19.718569 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:19.732512 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:19.732591 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:19.767932 1971155 cri.go:89] found id: ""
	I0120 14:03:19.767967 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.767978 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:19.767986 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:19.768060 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:19.803810 1971155 cri.go:89] found id: ""
	I0120 14:03:19.803849 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.803862 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:19.803870 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:19.803939 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:19.843834 1971155 cri.go:89] found id: ""
	I0120 14:03:19.843862 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.843873 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:19.843886 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:19.843958 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:19.881732 1971155 cri.go:89] found id: ""
	I0120 14:03:19.881763 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.881774 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:19.881781 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:19.881848 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:19.924381 1971155 cri.go:89] found id: ""
	I0120 14:03:19.924417 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.924428 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:19.924437 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:19.924502 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:19.970958 1971155 cri.go:89] found id: ""
	I0120 14:03:19.970987 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.970996 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:19.971004 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:19.971065 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:20.012745 1971155 cri.go:89] found id: ""
	I0120 14:03:20.012781 1971155 logs.go:282] 0 containers: []
	W0120 14:03:20.012792 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:20.012800 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:20.012874 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:20.051390 1971155 cri.go:89] found id: ""
	I0120 14:03:20.051440 1971155 logs.go:282] 0 containers: []
	W0120 14:03:20.051458 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:20.051472 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:20.051496 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:20.110400 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:20.110442 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:20.127460 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:20.127494 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:20.204395 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:20.204421 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:20.204438 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:20.285467 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:20.285512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:22.839418 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:22.853700 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:22.853779 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:22.889955 1971155 cri.go:89] found id: ""
	I0120 14:03:22.889984 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.889992 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:22.889998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:22.890051 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:22.927006 1971155 cri.go:89] found id: ""
	I0120 14:03:22.927035 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.927044 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:22.927050 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:22.927114 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:22.964259 1971155 cri.go:89] found id: ""
	I0120 14:03:22.964295 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.964321 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:22.964330 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:22.964394 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:23.002226 1971155 cri.go:89] found id: ""
	I0120 14:03:23.002259 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.002268 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:23.002274 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:23.002331 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:23.039583 1971155 cri.go:89] found id: ""
	I0120 14:03:23.039620 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.039633 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:23.039641 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:23.039722 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:23.078733 1971155 cri.go:89] found id: ""
	I0120 14:03:23.078761 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.078770 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:23.078803 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:23.078878 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:23.114333 1971155 cri.go:89] found id: ""
	I0120 14:03:23.114390 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.114403 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:23.114411 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:23.114485 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:23.150761 1971155 cri.go:89] found id: ""
	I0120 14:03:23.150797 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.150809 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:23.150824 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:23.150839 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:23.213320 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:23.213384 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:23.228681 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:23.228717 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:23.301816 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:23.301842 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:23.301858 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:23.387061 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:23.387117 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:25.931823 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:25.945038 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:25.945134 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:25.981262 1971155 cri.go:89] found id: ""
	I0120 14:03:25.981315 1971155 logs.go:282] 0 containers: []
	W0120 14:03:25.981330 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:25.981340 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:25.981420 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:26.018945 1971155 cri.go:89] found id: ""
	I0120 14:03:26.018980 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.018993 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:26.019001 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:26.019080 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:26.060446 1971155 cri.go:89] found id: ""
	I0120 14:03:26.060477 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.060487 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:26.060496 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:26.060563 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:26.097720 1971155 cri.go:89] found id: ""
	I0120 14:03:26.097761 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.097782 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:26.097792 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:26.097861 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:26.133561 1971155 cri.go:89] found id: ""
	I0120 14:03:26.133593 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.133605 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:26.133614 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:26.133701 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:26.175091 1971155 cri.go:89] found id: ""
	I0120 14:03:26.175124 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.175136 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:26.175144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:26.175206 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:26.214747 1971155 cri.go:89] found id: ""
	I0120 14:03:26.214779 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.214788 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:26.214794 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:26.214864 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:26.264211 1971155 cri.go:89] found id: ""
	I0120 14:03:26.264244 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.264255 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:26.264269 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:26.264291 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:26.282025 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:26.282062 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:26.359793 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:26.359820 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:26.359842 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:26.447177 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:26.447224 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:26.487488 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:26.487523 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:29.039824 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:29.054535 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:29.054619 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:29.096202 1971155 cri.go:89] found id: ""
	I0120 14:03:29.096233 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.096245 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:29.096254 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:29.096316 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:29.139442 1971155 cri.go:89] found id: ""
	I0120 14:03:29.139475 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.139485 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:29.139492 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:29.139565 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:29.181278 1971155 cri.go:89] found id: ""
	I0120 14:03:29.181320 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.181334 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:29.181343 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:29.181424 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:29.222018 1971155 cri.go:89] found id: ""
	I0120 14:03:29.222049 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.222058 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:29.222072 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:29.222129 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:29.263028 1971155 cri.go:89] found id: ""
	I0120 14:03:29.263071 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.263083 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:29.263092 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:29.263167 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:29.307933 1971155 cri.go:89] found id: ""
	I0120 14:03:29.307965 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.307973 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:29.307980 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:29.308040 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:29.344204 1971155 cri.go:89] found id: ""
	I0120 14:03:29.344237 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.344250 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:29.344258 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:29.344327 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:29.381577 1971155 cri.go:89] found id: ""
	I0120 14:03:29.381604 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.381613 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:29.381623 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:29.381636 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:29.396553 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:29.396592 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:29.476381 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:29.476406 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:29.476420 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:29.552542 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:29.552586 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:29.597585 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:29.597619 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:32.150749 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:32.166160 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:32.166240 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:32.209621 1971155 cri.go:89] found id: ""
	I0120 14:03:32.209657 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.209671 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:32.209682 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:32.209764 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:32.250347 1971155 cri.go:89] found id: ""
	I0120 14:03:32.250386 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.250397 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:32.250405 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:32.250477 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:32.291555 1971155 cri.go:89] found id: ""
	I0120 14:03:32.291587 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.291599 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:32.291607 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:32.291677 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:32.329975 1971155 cri.go:89] found id: ""
	I0120 14:03:32.330015 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.330023 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:32.330030 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:32.330107 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:32.371131 1971155 cri.go:89] found id: ""
	I0120 14:03:32.371170 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.371190 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:32.371199 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:32.371273 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:32.409613 1971155 cri.go:89] found id: ""
	I0120 14:03:32.409653 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.409665 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:32.409672 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:32.409732 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:32.448898 1971155 cri.go:89] found id: ""
	I0120 14:03:32.448932 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.448944 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:32.448953 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:32.449029 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:32.486258 1971155 cri.go:89] found id: ""
	I0120 14:03:32.486295 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.486308 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:32.486323 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:32.486340 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:32.538196 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:32.538238 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:32.553140 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:32.553173 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:32.640124 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:32.640147 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:32.640161 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:32.725556 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:32.725615 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:35.276962 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:35.292662 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:35.292754 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:35.332066 1971155 cri.go:89] found id: ""
	I0120 14:03:35.332099 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.332111 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:35.332119 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:35.332188 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:35.369977 1971155 cri.go:89] found id: ""
	I0120 14:03:35.370010 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.370024 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:35.370030 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:35.370099 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:35.412630 1971155 cri.go:89] found id: ""
	I0120 14:03:35.412663 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.412672 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:35.412680 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:35.412746 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:35.450785 1971155 cri.go:89] found id: ""
	I0120 14:03:35.450819 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.450830 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:35.450838 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:35.450908 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:35.496877 1971155 cri.go:89] found id: ""
	I0120 14:03:35.496930 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.496943 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:35.496950 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:35.497021 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:35.538626 1971155 cri.go:89] found id: ""
	I0120 14:03:35.538662 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.538675 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:35.538684 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:35.538768 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:35.579144 1971155 cri.go:89] found id: ""
	I0120 14:03:35.579181 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.579195 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:35.579204 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:35.579283 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:35.623935 1971155 cri.go:89] found id: ""
	I0120 14:03:35.623985 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.623997 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:35.624038 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:35.624074 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:35.664682 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:35.664716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:35.722441 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:35.722505 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:35.752215 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:35.752246 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:35.843666 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:35.843692 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:35.843706 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:38.427318 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:38.441690 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:38.441767 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:38.481605 1971155 cri.go:89] found id: ""
	I0120 14:03:38.481636 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.481648 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:38.481655 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:38.481726 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:38.518378 1971155 cri.go:89] found id: ""
	I0120 14:03:38.518415 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.518427 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:38.518436 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:38.518512 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:38.561625 1971155 cri.go:89] found id: ""
	I0120 14:03:38.561674 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.561687 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:38.561696 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:38.561764 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:38.603557 1971155 cri.go:89] found id: ""
	I0120 14:03:38.603585 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.603593 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:38.603600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:38.603671 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:38.644242 1971155 cri.go:89] found id: ""
	I0120 14:03:38.644276 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.644289 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:38.644298 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:38.644364 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:38.686124 1971155 cri.go:89] found id: ""
	I0120 14:03:38.686154 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.686166 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:38.686175 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:38.686257 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:38.731861 1971155 cri.go:89] found id: ""
	I0120 14:03:38.731896 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.731906 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:38.731915 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:38.732002 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:38.773494 1971155 cri.go:89] found id: ""
	I0120 14:03:38.773522 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.773533 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:38.773579 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:38.773602 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:38.827125 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:38.827168 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:38.841903 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:38.841939 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:38.928392 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:38.928423 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:38.928440 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:39.008227 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:39.008270 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:41.554775 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:41.568912 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:41.568983 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:41.616455 1971155 cri.go:89] found id: ""
	I0120 14:03:41.616483 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.616491 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:41.616505 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:41.616584 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:41.654958 1971155 cri.go:89] found id: ""
	I0120 14:03:41.654995 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.655007 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:41.655014 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:41.655091 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:41.695758 1971155 cri.go:89] found id: ""
	I0120 14:03:41.695800 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.695814 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:41.695824 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:41.695901 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:41.733782 1971155 cri.go:89] found id: ""
	I0120 14:03:41.733815 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.733826 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:41.733834 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:41.733906 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:41.771097 1971155 cri.go:89] found id: ""
	I0120 14:03:41.771129 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.771141 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:41.771150 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:41.771266 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:41.808590 1971155 cri.go:89] found id: ""
	I0120 14:03:41.808629 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.808643 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:41.808652 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:41.808733 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:41.848943 1971155 cri.go:89] found id: ""
	I0120 14:03:41.848971 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.848982 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:41.848990 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:41.849057 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:41.886267 1971155 cri.go:89] found id: ""
	I0120 14:03:41.886302 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.886315 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:41.886328 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:41.886354 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:41.903471 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:41.903519 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:41.980320 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:41.980342 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:41.980358 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:42.060823 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:42.060868 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:42.102476 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:42.102511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:44.677081 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:44.691997 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:44.692094 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:44.732561 1971155 cri.go:89] found id: ""
	I0120 14:03:44.732599 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.732611 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:44.732620 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:44.732701 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:44.774215 1971155 cri.go:89] found id: ""
	I0120 14:03:44.774250 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.774259 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:44.774266 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:44.774330 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:44.815997 1971155 cri.go:89] found id: ""
	I0120 14:03:44.816031 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.816040 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:44.816046 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:44.816109 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:44.853946 1971155 cri.go:89] found id: ""
	I0120 14:03:44.853984 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.853996 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:44.854004 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:44.854070 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:44.896969 1971155 cri.go:89] found id: ""
	I0120 14:03:44.897006 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.897018 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:44.897028 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:44.897120 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:44.942458 1971155 cri.go:89] found id: ""
	I0120 14:03:44.942496 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.942508 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:44.942518 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:44.942648 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:44.984028 1971155 cri.go:89] found id: ""
	I0120 14:03:44.984059 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.984084 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:44.984094 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:44.984173 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:45.026096 1971155 cri.go:89] found id: ""
	I0120 14:03:45.026130 1971155 logs.go:282] 0 containers: []
	W0120 14:03:45.026141 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:45.026153 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:45.026169 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:45.110471 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:45.110527 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:45.154855 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:45.154892 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:45.214465 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:45.214511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:45.232020 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:45.232054 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:45.312932 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:47.813923 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:47.828326 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:47.828422 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:47.865843 1971155 cri.go:89] found id: ""
	I0120 14:03:47.865875 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.865884 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:47.865891 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:47.865952 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:47.913554 1971155 cri.go:89] found id: ""
	I0120 14:03:47.913582 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.913590 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:47.913597 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:47.913655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:47.970084 1971155 cri.go:89] found id: ""
	I0120 14:03:47.970115 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.970135 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:47.970144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:47.970205 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:48.016631 1971155 cri.go:89] found id: ""
	I0120 14:03:48.016737 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.016750 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:48.016758 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:48.016833 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:48.073208 1971155 cri.go:89] found id: ""
	I0120 14:03:48.073253 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.073266 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:48.073276 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:48.073387 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:48.111638 1971155 cri.go:89] found id: ""
	I0120 14:03:48.111680 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.111692 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:48.111701 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:48.111783 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:48.155605 1971155 cri.go:89] found id: ""
	I0120 14:03:48.155640 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.155653 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:48.155661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:48.155732 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:48.204162 1971155 cri.go:89] found id: ""
	I0120 14:03:48.204204 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.204219 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:48.204234 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:48.204257 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:48.259987 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:48.260042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:48.275801 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:48.275832 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:48.361115 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:48.361150 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:48.361170 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:48.443876 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:48.443921 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:50.992981 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:51.009283 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:51.009370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:51.052492 1971155 cri.go:89] found id: ""
	I0120 14:03:51.052523 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.052533 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:51.052540 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:51.052616 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:51.096548 1971155 cri.go:89] found id: ""
	I0120 14:03:51.096575 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.096583 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:51.096589 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:51.096655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:51.138339 1971155 cri.go:89] found id: ""
	I0120 14:03:51.138369 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.138378 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:51.138385 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:51.138456 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:51.181155 1971155 cri.go:89] found id: ""
	I0120 14:03:51.181188 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.181198 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:51.181205 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:51.181261 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:51.223988 1971155 cri.go:89] found id: ""
	I0120 14:03:51.224026 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.224038 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:51.224045 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:51.224106 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:51.261863 1971155 cri.go:89] found id: ""
	I0120 14:03:51.261896 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.261905 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:51.261911 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:51.261976 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:51.303816 1971155 cri.go:89] found id: ""
	I0120 14:03:51.303850 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.303862 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:51.303870 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:51.303946 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:51.340897 1971155 cri.go:89] found id: ""
	I0120 14:03:51.340935 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.340946 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:51.340960 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:51.340983 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:51.393462 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:51.393512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:51.409330 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:51.409361 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:51.483485 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:51.483510 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:51.483525 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:51.560879 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:51.560920 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:54.106090 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:54.121203 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:54.121282 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:54.171790 1971155 cri.go:89] found id: ""
	I0120 14:03:54.171818 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.171826 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:54.171833 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:54.171888 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:54.215021 1971155 cri.go:89] found id: ""
	I0120 14:03:54.215058 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.215069 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:54.215076 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:54.215138 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:54.252537 1971155 cri.go:89] found id: ""
	I0120 14:03:54.252565 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.252573 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:54.252580 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:54.252635 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:54.291366 1971155 cri.go:89] found id: ""
	I0120 14:03:54.291396 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.291405 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:54.291411 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:54.291482 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:54.328162 1971155 cri.go:89] found id: ""
	I0120 14:03:54.328206 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.328219 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:54.328227 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:54.328310 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:54.366862 1971155 cri.go:89] found id: ""
	I0120 14:03:54.366898 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.366908 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:54.366920 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:54.366996 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:54.404501 1971155 cri.go:89] found id: ""
	I0120 14:03:54.404534 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.404543 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:54.404549 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:54.404609 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:54.443468 1971155 cri.go:89] found id: ""
	I0120 14:03:54.443504 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.443518 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:54.443531 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:54.443554 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:54.458948 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:54.458993 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:54.542353 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:54.542379 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:54.542400 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:54.629014 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:54.629060 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:54.673822 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:54.673857 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:57.228212 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:57.242552 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:57.242667 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:57.282187 1971155 cri.go:89] found id: ""
	I0120 14:03:57.282215 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.282225 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:57.282232 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:57.282306 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:57.319233 1971155 cri.go:89] found id: ""
	I0120 14:03:57.319260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.319268 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:57.319279 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:57.319340 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:57.356706 1971155 cri.go:89] found id: ""
	I0120 14:03:57.356730 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.356738 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:57.356744 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:57.356805 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:57.396553 1971155 cri.go:89] found id: ""
	I0120 14:03:57.396583 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.396594 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:57.396600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:57.396657 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:57.434802 1971155 cri.go:89] found id: ""
	I0120 14:03:57.434835 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.434847 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:57.434855 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:57.434927 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:57.471668 1971155 cri.go:89] found id: ""
	I0120 14:03:57.471699 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.471710 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:57.471719 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:57.471789 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:57.512283 1971155 cri.go:89] found id: ""
	I0120 14:03:57.512318 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.512329 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:57.512337 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:57.512409 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:57.549948 1971155 cri.go:89] found id: ""
	I0120 14:03:57.549977 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.549986 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:57.549996 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:57.550010 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:57.639160 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:57.639213 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:57.685920 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:57.685954 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:57.743891 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:57.743935 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:57.760181 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:57.760223 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:57.840777 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:00.342573 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:00.360314 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:00.360397 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:00.407962 1971155 cri.go:89] found id: ""
	I0120 14:04:00.407997 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.408010 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:00.408020 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:00.408086 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:00.450962 1971155 cri.go:89] found id: ""
	I0120 14:04:00.451040 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.451053 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:00.451062 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:00.451129 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:00.487180 1971155 cri.go:89] found id: ""
	I0120 14:04:00.487216 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.487227 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:00.487239 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:00.487311 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:00.530835 1971155 cri.go:89] found id: ""
	I0120 14:04:00.530864 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.530873 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:00.530880 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:00.530948 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:00.570212 1971155 cri.go:89] found id: ""
	I0120 14:04:00.570245 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.570257 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:00.570265 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:00.570335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:00.611685 1971155 cri.go:89] found id: ""
	I0120 14:04:00.611716 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.611725 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:00.611731 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:00.611785 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:00.649370 1971155 cri.go:89] found id: ""
	I0120 14:04:00.649410 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.649423 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:00.649432 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:00.649498 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:00.685853 1971155 cri.go:89] found id: ""
	I0120 14:04:00.685889 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.685901 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:00.685915 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:00.685930 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:00.737015 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:00.737051 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:00.751682 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:00.751716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:00.830222 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:00.830247 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:00.830262 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:00.918955 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:00.919003 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:03.461705 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:03.478063 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:03.478144 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:03.525289 1971155 cri.go:89] found id: ""
	I0120 14:04:03.525326 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.525339 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:03.525349 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:03.525427 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:03.565302 1971155 cri.go:89] found id: ""
	I0120 14:04:03.565339 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.565351 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:03.565360 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:03.565441 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:03.607021 1971155 cri.go:89] found id: ""
	I0120 14:04:03.607048 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.607056 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:03.607063 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:03.607122 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:03.650398 1971155 cri.go:89] found id: ""
	I0120 14:04:03.650425 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.650433 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:03.650445 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:03.650513 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:03.689498 1971155 cri.go:89] found id: ""
	I0120 14:04:03.689531 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.689539 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:03.689545 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:03.689607 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:03.726928 1971155 cri.go:89] found id: ""
	I0120 14:04:03.726965 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.726978 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:03.726987 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:03.727054 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:03.764493 1971155 cri.go:89] found id: ""
	I0120 14:04:03.764532 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.764544 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:03.764552 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:03.764622 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:03.803514 1971155 cri.go:89] found id: ""
	I0120 14:04:03.803550 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.803562 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:03.803575 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:03.803595 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:03.847009 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:03.847045 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:03.900078 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:03.900124 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:03.916146 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:03.916179 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:03.988068 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:03.988102 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:03.988121 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:06.568829 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:06.583335 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:06.583422 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:06.628247 1971155 cri.go:89] found id: ""
	I0120 14:04:06.628283 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.628296 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:06.628305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:06.628365 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:06.673764 1971155 cri.go:89] found id: ""
	I0120 14:04:06.673792 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.673804 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:06.673820 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:06.673892 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:06.714328 1971155 cri.go:89] found id: ""
	I0120 14:04:06.714361 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.714373 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:06.714381 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:06.714458 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:06.750935 1971155 cri.go:89] found id: ""
	I0120 14:04:06.750975 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.750987 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:06.750996 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:06.751061 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:06.788944 1971155 cri.go:89] found id: ""
	I0120 14:04:06.788975 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.788982 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:06.788988 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:06.789056 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:06.826176 1971155 cri.go:89] found id: ""
	I0120 14:04:06.826216 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.826228 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:06.826245 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:06.826322 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:06.864607 1971155 cri.go:89] found id: ""
	I0120 14:04:06.864640 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.864649 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:06.864656 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:06.864741 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:06.901814 1971155 cri.go:89] found id: ""
	I0120 14:04:06.901889 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.901909 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:06.901922 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:06.901944 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:06.953391 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:06.953439 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:06.967876 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:06.967914 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:07.055449 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:07.055486 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:07.055511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:07.138279 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:07.138328 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:09.684182 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:09.699353 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:09.699432 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:09.738834 1971155 cri.go:89] found id: ""
	I0120 14:04:09.738864 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.738875 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:09.738883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:09.738963 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:09.774822 1971155 cri.go:89] found id: ""
	I0120 14:04:09.774852 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.774864 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:09.774872 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:09.774942 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:09.813132 1971155 cri.go:89] found id: ""
	I0120 14:04:09.813167 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.813179 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:09.813187 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:09.813258 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:09.850809 1971155 cri.go:89] found id: ""
	I0120 14:04:09.850844 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.850855 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:09.850864 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:09.850947 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:09.889768 1971155 cri.go:89] found id: ""
	I0120 14:04:09.889802 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.889813 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:09.889821 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:09.889900 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:09.932037 1971155 cri.go:89] found id: ""
	I0120 14:04:09.932073 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.932081 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:09.932087 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:09.932150 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:09.970153 1971155 cri.go:89] found id: ""
	I0120 14:04:09.970197 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.970210 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:09.970218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:09.970287 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:10.009506 1971155 cri.go:89] found id: ""
	I0120 14:04:10.009535 1971155 logs.go:282] 0 containers: []
	W0120 14:04:10.009544 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:10.009555 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:10.009568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:10.097837 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:10.097896 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:10.140488 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:10.140534 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:10.195531 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:10.195575 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:10.210277 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:10.210322 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:10.296146 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:12.796944 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:12.810984 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:12.811085 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:12.849374 1971155 cri.go:89] found id: ""
	I0120 14:04:12.849413 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.849426 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:12.849435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:12.849509 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:12.885922 1971155 cri.go:89] found id: ""
	I0120 14:04:12.885951 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.885960 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:12.885967 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:12.886039 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:12.922978 1971155 cri.go:89] found id: ""
	I0120 14:04:12.923019 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.923031 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:12.923040 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:12.923108 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:12.960519 1971155 cri.go:89] found id: ""
	I0120 14:04:12.960563 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.960572 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:12.960578 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:12.960688 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:12.997662 1971155 cri.go:89] found id: ""
	I0120 14:04:12.997702 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.997715 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:12.997724 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:12.997803 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:13.035613 1971155 cri.go:89] found id: ""
	I0120 14:04:13.035640 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.035651 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:13.035660 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:13.035736 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:13.073354 1971155 cri.go:89] found id: ""
	I0120 14:04:13.073389 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.073401 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:13.073410 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:13.073480 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:13.113735 1971155 cri.go:89] found id: ""
	I0120 14:04:13.113771 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.113780 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:13.113791 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:13.113804 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:13.170858 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:13.170906 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:13.186341 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:13.186375 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:13.260514 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:13.260540 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:13.260557 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:13.347360 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:13.347411 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:15.891859 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:15.907144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:15.907238 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:15.943638 1971155 cri.go:89] found id: ""
	I0120 14:04:15.943675 1971155 logs.go:282] 0 containers: []
	W0120 14:04:15.943686 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:15.943693 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:15.943753 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:15.981820 1971155 cri.go:89] found id: ""
	I0120 14:04:15.981868 1971155 logs.go:282] 0 containers: []
	W0120 14:04:15.981882 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:15.981891 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:15.981971 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:16.019987 1971155 cri.go:89] found id: ""
	I0120 14:04:16.020058 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.020071 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:16.020080 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:16.020156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:16.059245 1971155 cri.go:89] found id: ""
	I0120 14:04:16.059278 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.059288 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:16.059295 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:16.059370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:16.095081 1971155 cri.go:89] found id: ""
	I0120 14:04:16.095123 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.095136 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:16.095146 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:16.095224 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:16.134357 1971155 cri.go:89] found id: ""
	I0120 14:04:16.134403 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.134416 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:16.134425 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:16.134497 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:16.177729 1971155 cri.go:89] found id: ""
	I0120 14:04:16.177762 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.177774 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:16.177783 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:16.177864 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:16.214324 1971155 cri.go:89] found id: ""
	I0120 14:04:16.214360 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.214371 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:16.214392 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:16.214412 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:16.270670 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:16.270716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:16.326541 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:16.326587 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:16.343430 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:16.343469 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:16.429522 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:16.429554 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:16.429572 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:19.008712 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:19.024398 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:19.024489 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:19.064138 1971155 cri.go:89] found id: ""
	I0120 14:04:19.064169 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.064178 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:19.064184 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:19.064253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:19.102639 1971155 cri.go:89] found id: ""
	I0120 14:04:19.102672 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.102681 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:19.102687 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:19.102755 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:19.141058 1971155 cri.go:89] found id: ""
	I0120 14:04:19.141105 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.141119 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:19.141130 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:19.141200 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:19.179972 1971155 cri.go:89] found id: ""
	I0120 14:04:19.180004 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.180013 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:19.180025 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:19.180095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:19.219516 1971155 cri.go:89] found id: ""
	I0120 14:04:19.219549 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.219562 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:19.219571 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:19.219634 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:19.262728 1971155 cri.go:89] found id: ""
	I0120 14:04:19.262764 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.262776 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:19.262785 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:19.262860 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:19.299472 1971155 cri.go:89] found id: ""
	I0120 14:04:19.299527 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.299539 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:19.299548 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:19.299634 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:19.341054 1971155 cri.go:89] found id: ""
	I0120 14:04:19.341095 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.341107 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:19.341119 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:19.341133 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:19.426002 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:19.426058 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:19.469471 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:19.469504 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:19.524625 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:19.524661 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:19.539365 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:19.539398 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:19.620545 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:22.122261 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:22.137515 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:22.137590 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:22.177366 1971155 cri.go:89] found id: ""
	I0120 14:04:22.177405 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.177417 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:22.177426 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:22.177494 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:22.215596 1971155 cri.go:89] found id: ""
	I0120 14:04:22.215641 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.215653 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:22.215662 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:22.215734 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:22.252783 1971155 cri.go:89] found id: ""
	I0120 14:04:22.252820 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.252832 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:22.252841 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:22.252917 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:22.295160 1971155 cri.go:89] found id: ""
	I0120 14:04:22.295199 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.295213 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:22.295221 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:22.295284 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:22.334614 1971155 cri.go:89] found id: ""
	I0120 14:04:22.334651 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.334662 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:22.334672 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:22.334754 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:22.372516 1971155 cri.go:89] found id: ""
	I0120 14:04:22.372545 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.372554 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:22.372562 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:22.372633 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:22.412784 1971155 cri.go:89] found id: ""
	I0120 14:04:22.412819 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.412827 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:22.412833 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:22.412895 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:22.449865 1971155 cri.go:89] found id: ""
	I0120 14:04:22.449900 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.449909 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:22.449920 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:22.449934 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:22.464473 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:22.464514 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:22.546804 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:22.546835 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:22.546858 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:22.624614 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:22.624664 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:22.679053 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:22.679085 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:25.238495 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:25.254177 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:25.254253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:25.299255 1971155 cri.go:89] found id: ""
	I0120 14:04:25.299291 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.299300 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:25.299308 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:25.299373 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:25.337454 1971155 cri.go:89] found id: ""
	I0120 14:04:25.337481 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.337490 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:25.337496 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:25.337556 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:25.375094 1971155 cri.go:89] found id: ""
	I0120 14:04:25.375129 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.375139 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:25.375148 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:25.375224 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:25.413177 1971155 cri.go:89] found id: ""
	I0120 14:04:25.413206 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.413217 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:25.413223 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:25.413288 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:25.448775 1971155 cri.go:89] found id: ""
	I0120 14:04:25.448812 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.448821 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:25.448827 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:25.448883 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:25.484560 1971155 cri.go:89] found id: ""
	I0120 14:04:25.484591 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.484600 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:25.484607 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:25.484660 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:25.522990 1971155 cri.go:89] found id: ""
	I0120 14:04:25.523029 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.523041 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:25.523049 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:25.523128 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:25.560861 1971155 cri.go:89] found id: ""
	I0120 14:04:25.560899 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.560910 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:25.560925 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:25.560941 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:25.614479 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:25.614528 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:25.630030 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:25.630070 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:25.704721 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:25.704758 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:25.704781 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:25.782265 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:25.782309 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:28.332905 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:28.351517 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:28.351594 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:28.394070 1971155 cri.go:89] found id: ""
	I0120 14:04:28.394110 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.394122 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:28.394130 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:28.394204 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:28.445893 1971155 cri.go:89] found id: ""
	I0120 14:04:28.445924 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.445934 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:28.445940 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:28.446034 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:28.511766 1971155 cri.go:89] found id: ""
	I0120 14:04:28.511801 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.511811 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:28.511820 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:28.511891 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:28.558333 1971155 cri.go:89] found id: ""
	I0120 14:04:28.558369 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.558382 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:28.558391 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:28.558469 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:28.608161 1971155 cri.go:89] found id: ""
	I0120 14:04:28.608196 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.608207 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:28.608215 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:28.608288 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:28.645545 1971155 cri.go:89] found id: ""
	I0120 14:04:28.645576 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.645585 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:28.645592 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:28.645651 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:28.682795 1971155 cri.go:89] found id: ""
	I0120 14:04:28.682833 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.682845 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:28.682854 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:28.682943 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:28.719887 1971155 cri.go:89] found id: ""
	I0120 14:04:28.719918 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.719928 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:28.719941 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:28.719965 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:28.776644 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:28.776683 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:28.791778 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:28.791812 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:28.870972 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:28.871001 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:28.871027 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:28.950524 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:28.950568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:31.494786 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:31.508961 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:31.509041 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:31.550239 1971155 cri.go:89] found id: ""
	I0120 14:04:31.550275 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.550287 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:31.550295 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:31.550374 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:31.589113 1971155 cri.go:89] found id: ""
	I0120 14:04:31.589149 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.589161 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:31.589169 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:31.589271 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:31.626500 1971155 cri.go:89] found id: ""
	I0120 14:04:31.626537 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.626547 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:31.626556 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:31.626655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:31.661941 1971155 cri.go:89] found id: ""
	I0120 14:04:31.661972 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.661980 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:31.661987 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:31.662079 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:31.699223 1971155 cri.go:89] found id: ""
	I0120 14:04:31.699269 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.699283 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:31.699291 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:31.699359 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:31.736559 1971155 cri.go:89] found id: ""
	I0120 14:04:31.736589 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.736601 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:31.736608 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:31.736680 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:31.774254 1971155 cri.go:89] found id: ""
	I0120 14:04:31.774296 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.774304 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:31.774314 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:31.774460 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:31.813913 1971155 cri.go:89] found id: ""
	I0120 14:04:31.813952 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.813964 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:31.813977 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:31.813991 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:31.864887 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:31.864936 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:31.880250 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:31.880286 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:31.955208 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:31.955232 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:31.955247 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:32.039812 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:32.039875 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:34.582127 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:34.595661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:34.595751 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:34.637306 1971155 cri.go:89] found id: ""
	I0120 14:04:34.637343 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.637355 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:34.637367 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:34.637440 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:34.676881 1971155 cri.go:89] found id: ""
	I0120 14:04:34.676913 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.676924 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:34.676929 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:34.676985 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:34.715677 1971155 cri.go:89] found id: ""
	I0120 14:04:34.715712 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.715723 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:34.715737 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:34.715801 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:34.754821 1971155 cri.go:89] found id: ""
	I0120 14:04:34.754855 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.754867 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:34.754875 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:34.754947 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:34.793093 1971155 cri.go:89] found id: ""
	I0120 14:04:34.793124 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.793133 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:34.793139 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:34.793200 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:34.830252 1971155 cri.go:89] found id: ""
	I0120 14:04:34.830285 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.830295 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:34.830302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:34.830370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:34.869405 1971155 cri.go:89] found id: ""
	I0120 14:04:34.869436 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.869447 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:34.869455 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:34.869528 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:34.910676 1971155 cri.go:89] found id: ""
	I0120 14:04:34.910708 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.910721 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:34.910735 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:34.910751 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:34.961049 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:34.961094 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:34.976224 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:34.976260 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:35.049407 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:35.049434 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:35.049452 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:35.133338 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:35.133396 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:37.676133 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:37.690435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:37.690520 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:37.732788 1971155 cri.go:89] found id: ""
	I0120 14:04:37.732824 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.732837 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:37.732846 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:37.732914 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:37.770338 1971155 cri.go:89] found id: ""
	I0120 14:04:37.770375 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.770387 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:37.770395 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:37.770461 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:37.813580 1971155 cri.go:89] found id: ""
	I0120 14:04:37.813612 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.813639 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:37.813645 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:37.813702 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:37.854706 1971155 cri.go:89] found id: ""
	I0120 14:04:37.854740 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.854751 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:37.854759 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:37.854841 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:37.891577 1971155 cri.go:89] found id: ""
	I0120 14:04:37.891607 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.891616 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:37.891623 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:37.891681 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:37.928718 1971155 cri.go:89] found id: ""
	I0120 14:04:37.928750 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.928762 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:37.928772 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:37.928844 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:37.964166 1971155 cri.go:89] found id: ""
	I0120 14:04:37.964203 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.964211 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:37.964218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:37.964279 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:38.005257 1971155 cri.go:89] found id: ""
	I0120 14:04:38.005299 1971155 logs.go:282] 0 containers: []
	W0120 14:04:38.005311 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:38.005325 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:38.005340 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:38.058706 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:38.058756 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:38.073507 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:38.073584 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:38.149050 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:38.149073 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:38.149091 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:38.227105 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:38.227163 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:40.772041 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:40.787399 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:40.787471 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:40.828186 1971155 cri.go:89] found id: ""
	I0120 14:04:40.828226 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.828247 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:40.828257 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:40.828327 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:40.869532 1971155 cri.go:89] found id: ""
	I0120 14:04:40.869561 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.869573 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:40.869581 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:40.869670 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:40.916288 1971155 cri.go:89] found id: ""
	I0120 14:04:40.916324 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.916343 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:40.916357 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:40.916425 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:40.953018 1971155 cri.go:89] found id: ""
	I0120 14:04:40.953053 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.953066 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:40.953076 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:40.953150 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:40.993977 1971155 cri.go:89] found id: ""
	I0120 14:04:40.994012 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.994024 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:40.994033 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:40.994104 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:41.037652 1971155 cri.go:89] found id: ""
	I0120 14:04:41.037678 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.037685 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:41.037692 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:41.037756 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:41.085826 1971155 cri.go:89] found id: ""
	I0120 14:04:41.085855 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.085864 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:41.085873 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:41.085950 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:41.128902 1971155 cri.go:89] found id: ""
	I0120 14:04:41.128939 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.128951 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:41.128965 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:41.128984 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:41.182933 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:41.182976 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:41.198454 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:41.198493 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:41.278062 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:41.278090 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:41.278106 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:41.359935 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:41.359983 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:43.908548 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:43.927397 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:43.927492 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:43.975131 1971155 cri.go:89] found id: ""
	I0120 14:04:43.975160 1971155 logs.go:282] 0 containers: []
	W0120 14:04:43.975169 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:43.975175 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:43.975243 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:44.020970 1971155 cri.go:89] found id: ""
	I0120 14:04:44.021006 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.021018 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:44.021027 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:44.021135 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:44.067873 1971155 cri.go:89] found id: ""
	I0120 14:04:44.067914 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.067927 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:44.067936 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:44.068010 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:44.108047 1971155 cri.go:89] found id: ""
	I0120 14:04:44.108082 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.108093 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:44.108099 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:44.108161 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:44.149416 1971155 cri.go:89] found id: ""
	I0120 14:04:44.149449 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.149458 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:44.149466 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:44.149521 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:44.189664 1971155 cri.go:89] found id: ""
	I0120 14:04:44.189701 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.189712 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:44.189720 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:44.189787 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:44.233518 1971155 cri.go:89] found id: ""
	I0120 14:04:44.233548 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.233558 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:44.233565 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:44.233635 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:44.279568 1971155 cri.go:89] found id: ""
	I0120 14:04:44.279603 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.279614 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:44.279626 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:44.279641 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:44.348693 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:44.348742 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:44.363510 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:44.363546 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:44.437107 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:44.437132 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:44.437146 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:44.516463 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:44.516512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:47.065723 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:47.081983 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:47.082120 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:47.122906 1971155 cri.go:89] found id: ""
	I0120 14:04:47.122945 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.122958 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:47.122969 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:47.123060 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:47.166879 1971155 cri.go:89] found id: ""
	I0120 14:04:47.166916 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.166928 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:47.166937 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:47.167012 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:47.213675 1971155 cri.go:89] found id: ""
	I0120 14:04:47.213706 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.213715 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:47.213722 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:47.213778 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:47.254655 1971155 cri.go:89] found id: ""
	I0120 14:04:47.254692 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.254702 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:47.254711 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:47.254787 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:47.297680 1971155 cri.go:89] found id: ""
	I0120 14:04:47.297718 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.297731 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:47.297741 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:47.297829 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:47.337150 1971155 cri.go:89] found id: ""
	I0120 14:04:47.337179 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.337188 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:47.337194 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:47.337258 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:47.376190 1971155 cri.go:89] found id: ""
	I0120 14:04:47.376223 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.376234 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:47.376242 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:47.376343 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:47.424425 1971155 cri.go:89] found id: ""
	I0120 14:04:47.424465 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.424477 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:47.424491 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:47.424508 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:47.439773 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:47.439807 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:47.515012 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:47.515040 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:47.515077 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:47.602215 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:47.602253 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:47.647880 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:47.647910 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:50.211849 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:50.225773 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:50.225855 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:50.268626 1971155 cri.go:89] found id: ""
	I0120 14:04:50.268663 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.268676 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:50.268686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:50.268759 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:50.307523 1971155 cri.go:89] found id: ""
	I0120 14:04:50.307562 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.307575 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:50.307584 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:50.307656 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:50.347783 1971155 cri.go:89] found id: ""
	I0120 14:04:50.347820 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.347832 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:50.347840 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:50.347910 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:50.394427 1971155 cri.go:89] found id: ""
	I0120 14:04:50.394462 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.394474 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:50.394482 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:50.394564 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:50.434136 1971155 cri.go:89] found id: ""
	I0120 14:04:50.434168 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.434178 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:50.434187 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:50.434253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:50.472220 1971155 cri.go:89] found id: ""
	I0120 14:04:50.472256 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.472268 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:50.472277 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:50.472341 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:50.513511 1971155 cri.go:89] found id: ""
	I0120 14:04:50.513541 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.513552 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:50.513560 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:50.513630 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:50.551073 1971155 cri.go:89] found id: ""
	I0120 14:04:50.551110 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.551121 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:50.551143 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:50.551163 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:50.565714 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:50.565744 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:50.651186 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:50.651214 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:50.651238 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:50.735185 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:50.735234 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:50.780258 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:50.780287 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:53.331081 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:53.346851 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:53.346935 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:53.390862 1971155 cri.go:89] found id: ""
	I0120 14:04:53.390901 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.390915 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:53.390924 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:53.391007 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:53.433455 1971155 cri.go:89] found id: ""
	I0120 14:04:53.433482 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.433491 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:53.433497 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:53.433555 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:53.477771 1971155 cri.go:89] found id: ""
	I0120 14:04:53.477805 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.477817 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:53.477826 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:53.477898 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:53.518330 1971155 cri.go:89] found id: ""
	I0120 14:04:53.518365 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.518375 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:53.518384 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:53.518461 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:53.557755 1971155 cri.go:89] found id: ""
	I0120 14:04:53.557804 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.557817 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:53.557827 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:53.557907 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:53.600681 1971155 cri.go:89] found id: ""
	I0120 14:04:53.600718 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.600730 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:53.600739 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:53.600836 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:53.644255 1971155 cri.go:89] found id: ""
	I0120 14:04:53.644291 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.644302 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:53.644311 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:53.644398 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:53.681445 1971155 cri.go:89] found id: ""
	I0120 14:04:53.681485 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.681498 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:53.681513 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:53.681529 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:53.737076 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:53.737131 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:53.755500 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:53.755551 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:53.846378 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:53.846416 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:53.846435 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:53.956291 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:53.956337 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:56.505456 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:56.521259 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:56.521352 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:56.572379 1971155 cri.go:89] found id: ""
	I0120 14:04:56.572415 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.572427 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:56.572435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:56.572503 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:56.613123 1971155 cri.go:89] found id: ""
	I0120 14:04:56.613151 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.613162 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:56.613170 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:56.613237 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:56.650863 1971155 cri.go:89] found id: ""
	I0120 14:04:56.650896 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.650904 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:56.650911 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:56.650967 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:56.686709 1971155 cri.go:89] found id: ""
	I0120 14:04:56.686741 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.686749 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:56.686756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:56.686813 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:56.722765 1971155 cri.go:89] found id: ""
	I0120 14:04:56.722794 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.722802 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:56.722809 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:56.722867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:56.762188 1971155 cri.go:89] found id: ""
	I0120 14:04:56.762223 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.762235 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:56.762244 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:56.762321 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:56.807714 1971155 cri.go:89] found id: ""
	I0120 14:04:56.807741 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.807750 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:56.807756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:56.807818 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:56.846817 1971155 cri.go:89] found id: ""
	I0120 14:04:56.846851 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.846860 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:56.846870 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:56.846884 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:56.919562 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:56.919593 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:56.919613 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:57.007957 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:57.008011 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:57.051295 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:57.051339 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:57.104114 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:57.104172 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:59.620229 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:59.637010 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:59.637114 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:59.680984 1971155 cri.go:89] found id: ""
	I0120 14:04:59.681020 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.681032 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:59.681041 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:59.681128 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:59.725445 1971155 cri.go:89] found id: ""
	I0120 14:04:59.725480 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.725492 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:59.725501 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:59.725573 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:59.767962 1971155 cri.go:89] found id: ""
	I0120 14:04:59.767999 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.768012 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:59.768020 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:59.768091 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:59.812201 1971155 cri.go:89] found id: ""
	I0120 14:04:59.812240 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.812252 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:59.812267 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:59.812335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:59.853005 1971155 cri.go:89] found id: ""
	I0120 14:04:59.853034 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.853043 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:59.853049 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:59.853131 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:59.890747 1971155 cri.go:89] found id: ""
	I0120 14:04:59.890859 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.890878 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:59.890889 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:59.890969 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:59.934050 1971155 cri.go:89] found id: ""
	I0120 14:04:59.934090 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.934102 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:59.934110 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:59.934179 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:59.977069 1971155 cri.go:89] found id: ""
	I0120 14:04:59.977106 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.977119 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:59.977131 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:59.977150 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:00.070208 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:00.070261 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:00.116521 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:00.116557 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:00.175645 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:00.175695 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:00.192183 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:00.192228 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:00.273233 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:02.773877 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:02.788560 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:02.788661 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:02.838025 1971155 cri.go:89] found id: ""
	I0120 14:05:02.838061 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.838073 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:02.838082 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:02.838152 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:02.879106 1971155 cri.go:89] found id: ""
	I0120 14:05:02.879139 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.879150 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:02.879158 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:02.879226 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:02.919842 1971155 cri.go:89] found id: ""
	I0120 14:05:02.919883 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.919896 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:02.919905 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:02.919978 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:02.959612 1971155 cri.go:89] found id: ""
	I0120 14:05:02.959644 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.959656 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:02.959664 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:02.959737 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:03.018360 1971155 cri.go:89] found id: ""
	I0120 14:05:03.018392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.018401 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:03.018408 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:03.018491 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:03.064749 1971155 cri.go:89] found id: ""
	I0120 14:05:03.064779 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.064788 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:03.064801 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:03.064874 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:03.114566 1971155 cri.go:89] found id: ""
	I0120 14:05:03.114595 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.114617 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:03.114626 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:03.114695 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:03.163672 1971155 cri.go:89] found id: ""
	I0120 14:05:03.163707 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.163720 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:03.163733 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:03.163750 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:03.243662 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:03.243718 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:03.261586 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:03.261629 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:03.358343 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:03.358377 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:03.358393 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:03.452803 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:03.452852 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:06.004224 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:06.019368 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:06.019459 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:06.068617 1971155 cri.go:89] found id: ""
	I0120 14:05:06.068655 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.068668 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:06.068678 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:06.068747 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:06.112806 1971155 cri.go:89] found id: ""
	I0120 14:05:06.112859 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.112874 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:06.112883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:06.112960 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:06.150653 1971155 cri.go:89] found id: ""
	I0120 14:05:06.150695 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.150708 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:06.150716 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:06.150788 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:06.190915 1971155 cri.go:89] found id: ""
	I0120 14:05:06.190958 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.190973 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:06.190992 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:06.191077 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:06.237577 1971155 cri.go:89] found id: ""
	I0120 14:05:06.237616 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.237627 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:06.237636 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:06.237712 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:06.280826 1971155 cri.go:89] found id: ""
	I0120 14:05:06.280861 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.280873 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:06.280883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:06.280958 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:06.317836 1971155 cri.go:89] found id: ""
	I0120 14:05:06.317872 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.317883 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:06.317892 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:06.317962 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:06.365531 1971155 cri.go:89] found id: ""
	I0120 14:05:06.365574 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.365587 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:06.365601 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:06.365626 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:06.460369 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:06.460403 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:06.460422 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:06.541919 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:06.541967 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:06.588755 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:06.588805 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:06.648087 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:06.648140 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:09.166758 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:09.184071 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:09.184193 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:09.222998 1971155 cri.go:89] found id: ""
	I0120 14:05:09.223035 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.223048 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:09.223056 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:09.223140 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:09.275875 1971155 cri.go:89] found id: ""
	I0120 14:05:09.275912 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.275926 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:09.275934 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:09.276006 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:09.320157 1971155 cri.go:89] found id: ""
	I0120 14:05:09.320192 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.320210 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:09.320218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:09.320309 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:09.366463 1971155 cri.go:89] found id: ""
	I0120 14:05:09.366496 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.366505 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:09.366511 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:09.366582 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:09.414645 1971155 cri.go:89] found id: ""
	I0120 14:05:09.414675 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.414683 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:09.414689 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:09.414758 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:09.474004 1971155 cri.go:89] found id: ""
	I0120 14:05:09.474047 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.474059 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:09.474068 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:09.474153 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:09.536187 1971155 cri.go:89] found id: ""
	I0120 14:05:09.536217 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.536224 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:09.536230 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:09.536289 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:09.574100 1971155 cri.go:89] found id: ""
	I0120 14:05:09.574134 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.574142 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:09.574154 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:09.574167 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:09.620881 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:09.620923 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:09.676117 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:09.676177 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:09.692431 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:09.692473 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:09.768800 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:09.768831 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:09.768851 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:12.350771 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:12.365286 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:12.365374 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:12.402924 1971155 cri.go:89] found id: ""
	I0120 14:05:12.402966 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.402978 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:12.402998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:12.403073 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:12.442108 1971155 cri.go:89] found id: ""
	I0120 14:05:12.442138 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.442147 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:12.442154 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:12.442211 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:12.484002 1971155 cri.go:89] found id: ""
	I0120 14:05:12.484058 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.484071 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:12.484078 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:12.484149 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:12.524060 1971155 cri.go:89] found id: ""
	I0120 14:05:12.524097 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.524109 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:12.524118 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:12.524201 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:12.563120 1971155 cri.go:89] found id: ""
	I0120 14:05:12.563147 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.563156 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:12.563163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:12.563232 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:12.604782 1971155 cri.go:89] found id: ""
	I0120 14:05:12.604824 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.604838 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:12.604847 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:12.604925 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:12.642278 1971155 cri.go:89] found id: ""
	I0120 14:05:12.642305 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.642316 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:12.642326 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:12.642391 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:12.682274 1971155 cri.go:89] found id: ""
	I0120 14:05:12.682311 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.682323 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:12.682337 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:12.682353 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:12.773735 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:12.773785 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:12.825008 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:12.825049 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:12.873999 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:12.874042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:12.888767 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:12.888804 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:12.965739 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
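	[editor's note] Every pass fails at the same point: "describe nodes" cannot reach the apiserver because nothing is listening on localhost:8443 inside the VM. A minimal way to confirm that symptom from code (purely illustrative, not part of the test harness) is a TCP dial against the apiserver port:

	// Minimal probe for the symptom in the log: nothing listening on the
	// apiserver port, so kubectl gets "connection refused". Illustrative only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// localhost:8443 is the in-VM apiserver address the kubectl calls above use.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port closed:", err) // matches the repeated failure in the log
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}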
	I0120 14:05:15.466957 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:15.493756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:15.493839 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:15.538680 1971155 cri.go:89] found id: ""
	I0120 14:05:15.538709 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.538717 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:15.538724 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:15.538783 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:15.583029 1971155 cri.go:89] found id: ""
	I0120 14:05:15.583069 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.583081 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:15.583089 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:15.583174 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:15.623762 1971155 cri.go:89] found id: ""
	I0120 14:05:15.623801 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.623815 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:15.623825 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:15.623903 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:15.663883 1971155 cri.go:89] found id: ""
	I0120 14:05:15.663921 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.663930 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:15.663938 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:15.664013 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:15.701723 1971155 cri.go:89] found id: ""
	I0120 14:05:15.701758 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.701769 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:15.701778 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:15.701847 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:15.741612 1971155 cri.go:89] found id: ""
	I0120 14:05:15.741649 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.741661 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:15.741670 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:15.741736 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:15.783225 1971155 cri.go:89] found id: ""
	I0120 14:05:15.783257 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.783267 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:15.783275 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:15.783353 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:15.823664 1971155 cri.go:89] found id: ""
	I0120 14:05:15.823699 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.823713 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:15.823725 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:15.823740 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:15.876890 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:15.876936 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:15.892034 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:15.892077 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:15.967939 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:15.967966 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:15.967982 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:16.049913 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:16.049961 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:18.599849 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:18.613686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:18.613756 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:18.656070 1971155 cri.go:89] found id: ""
	I0120 14:05:18.656104 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.656113 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:18.656120 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:18.656184 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:18.694391 1971155 cri.go:89] found id: ""
	I0120 14:05:18.694420 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.694429 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:18.694435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:18.694499 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:18.733057 1971155 cri.go:89] found id: ""
	I0120 14:05:18.733094 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.733107 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:18.733114 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:18.733187 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:18.770955 1971155 cri.go:89] found id: ""
	I0120 14:05:18.770985 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.770993 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:18.770998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:18.771065 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:18.805878 1971155 cri.go:89] found id: ""
	I0120 14:05:18.805912 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.805924 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:18.805932 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:18.806015 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:18.843859 1971155 cri.go:89] found id: ""
	I0120 14:05:18.843891 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.843904 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:18.843912 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:18.843981 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:18.882554 1971155 cri.go:89] found id: ""
	I0120 14:05:18.882585 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.882594 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:18.882622 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:18.882686 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:18.919206 1971155 cri.go:89] found id: ""
	I0120 14:05:18.919242 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.919258 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:18.919269 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:18.919284 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:18.969428 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:18.969476 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:18.984666 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:18.984702 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:19.060472 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:19.060502 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:19.060517 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:19.136205 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:19.136248 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:21.681437 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:21.695755 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:21.695840 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:21.732554 1971155 cri.go:89] found id: ""
	I0120 14:05:21.732587 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.732599 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:21.732609 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:21.732680 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:21.771047 1971155 cri.go:89] found id: ""
	I0120 14:05:21.771078 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.771087 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:21.771093 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:21.771149 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:21.806053 1971155 cri.go:89] found id: ""
	I0120 14:05:21.806084 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.806096 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:21.806104 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:21.806176 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:21.843647 1971155 cri.go:89] found id: ""
	I0120 14:05:21.843679 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.843692 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:21.843699 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:21.843767 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:21.878399 1971155 cri.go:89] found id: ""
	I0120 14:05:21.878437 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.878449 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:21.878458 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:21.878531 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:21.912712 1971155 cri.go:89] found id: ""
	I0120 14:05:21.912746 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.912757 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:21.912770 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:21.912842 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:21.948182 1971155 cri.go:89] found id: ""
	I0120 14:05:21.948214 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.948225 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:21.948241 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:21.948311 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:21.987907 1971155 cri.go:89] found id: ""
	I0120 14:05:21.987945 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.987954 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:21.987964 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:21.987977 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:22.037198 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:22.037244 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:22.053238 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:22.053293 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:22.125680 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:22.125703 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:22.125721 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:22.208323 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:22.208371 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:24.752796 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:24.769865 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:24.769967 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:24.809247 1971155 cri.go:89] found id: ""
	I0120 14:05:24.809282 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.809293 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:24.809305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:24.809378 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:24.849761 1971155 cri.go:89] found id: ""
	I0120 14:05:24.849788 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.849797 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:24.849803 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:24.849867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:24.892195 1971155 cri.go:89] found id: ""
	I0120 14:05:24.892226 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.892239 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:24.892249 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:24.892315 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:24.935367 1971155 cri.go:89] found id: ""
	I0120 14:05:24.935400 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.935412 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:24.935420 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:24.935488 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:24.980132 1971155 cri.go:89] found id: ""
	I0120 14:05:24.980164 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.980179 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:24.980188 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:24.980269 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:25.017365 1971155 cri.go:89] found id: ""
	I0120 14:05:25.017394 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.017405 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:25.017413 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:25.017487 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:25.059078 1971155 cri.go:89] found id: ""
	I0120 14:05:25.059115 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.059127 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:25.059163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:25.059276 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:25.099507 1971155 cri.go:89] found id: ""
	I0120 14:05:25.099545 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.099557 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:25.099571 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:25.099588 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:25.174356 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:25.174385 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:25.174412 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:25.260260 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:25.260303 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:25.304309 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:25.304342 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:25.358340 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:25.358388 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:27.876603 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:27.892994 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:27.893071 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:27.931991 1971155 cri.go:89] found id: ""
	I0120 14:05:27.932048 1971155 logs.go:282] 0 containers: []
	W0120 14:05:27.932060 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:27.932068 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:27.932139 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:27.968882 1971155 cri.go:89] found id: ""
	I0120 14:05:27.968917 1971155 logs.go:282] 0 containers: []
	W0120 14:05:27.968926 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:27.968933 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:27.968998 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:28.009604 1971155 cri.go:89] found id: ""
	I0120 14:05:28.009635 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.009644 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:28.009650 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:28.009708 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:28.050036 1971155 cri.go:89] found id: ""
	I0120 14:05:28.050069 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.050080 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:28.050087 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:28.050156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:28.092348 1971155 cri.go:89] found id: ""
	I0120 14:05:28.092392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.092427 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:28.092436 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:28.092512 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:28.133751 1971155 cri.go:89] found id: ""
	I0120 14:05:28.133787 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.133796 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:28.133804 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:28.133875 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:28.177231 1971155 cri.go:89] found id: ""
	I0120 14:05:28.177268 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.177280 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:28.177288 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:28.177382 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:28.217125 1971155 cri.go:89] found id: ""
	I0120 14:05:28.217160 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.217175 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:28.217189 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:28.217207 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:28.305446 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:28.305480 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:28.305498 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:28.389940 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:28.389996 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:28.445472 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:28.445519 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:28.503281 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:28.503343 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:31.023457 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:31.039576 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:31.039665 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:31.090049 1971155 cri.go:89] found id: ""
	I0120 14:05:31.090086 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.090099 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:31.090108 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:31.090199 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:31.129134 1971155 cri.go:89] found id: ""
	I0120 14:05:31.129168 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.129180 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:31.129189 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:31.129246 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:31.169790 1971155 cri.go:89] found id: ""
	I0120 14:05:31.169822 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.169834 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:31.169845 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:31.169940 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:31.210981 1971155 cri.go:89] found id: ""
	I0120 14:05:31.211017 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.211030 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:31.211039 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:31.211126 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:31.254051 1971155 cri.go:89] found id: ""
	I0120 14:05:31.254081 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.254089 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:31.254096 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:31.254175 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:31.301717 1971155 cri.go:89] found id: ""
	I0120 14:05:31.301750 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.301772 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:31.301782 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:31.301847 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:31.343204 1971155 cri.go:89] found id: ""
	I0120 14:05:31.343233 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.343242 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:31.343248 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:31.343304 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:31.382466 1971155 cri.go:89] found id: ""
	I0120 14:05:31.382501 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.382512 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:31.382525 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:31.382544 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:31.461732 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:31.461781 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:31.461801 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:31.559483 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:31.559566 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:31.606795 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:31.606833 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:31.661423 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:31.661468 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:34.179481 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:34.195424 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:34.195496 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:34.236592 1971155 cri.go:89] found id: ""
	I0120 14:05:34.236623 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.236632 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:34.236639 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:34.236696 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:34.275803 1971155 cri.go:89] found id: ""
	I0120 14:05:34.275836 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.275848 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:34.275855 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:34.275944 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:34.315900 1971155 cri.go:89] found id: ""
	I0120 14:05:34.315932 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.315944 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:34.315952 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:34.316019 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:34.353614 1971155 cri.go:89] found id: ""
	I0120 14:05:34.353646 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.353655 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:34.353661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:34.353735 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:34.395635 1971155 cri.go:89] found id: ""
	I0120 14:05:34.395673 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.395685 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:34.395698 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:34.395782 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:34.435631 1971155 cri.go:89] found id: ""
	I0120 14:05:34.435662 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.435672 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:34.435678 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:34.435742 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:34.474904 1971155 cri.go:89] found id: ""
	I0120 14:05:34.474940 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.474952 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:34.474960 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:34.475030 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:34.513643 1971155 cri.go:89] found id: ""
	I0120 14:05:34.513675 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.513688 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:34.513701 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:34.513719 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:34.531525 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:34.531559 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:34.614600 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:34.614649 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:34.614667 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:34.691236 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:34.691282 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:34.739567 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:34.739616 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:37.294798 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:37.313219 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:37.313309 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:37.360355 1971155 cri.go:89] found id: ""
	I0120 14:05:37.360392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.360406 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:37.360415 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:37.360493 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:37.400427 1971155 cri.go:89] found id: ""
	I0120 14:05:37.400456 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.400466 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:37.400475 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:37.400535 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:37.472778 1971155 cri.go:89] found id: ""
	I0120 14:05:37.472800 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.472807 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:37.472814 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:37.472861 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:37.514813 1971155 cri.go:89] found id: ""
	I0120 14:05:37.514836 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.514846 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:37.514853 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:37.514912 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:37.559689 1971155 cri.go:89] found id: ""
	I0120 14:05:37.559724 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.559735 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:37.559768 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:37.559851 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:37.604249 1971155 cri.go:89] found id: ""
	I0120 14:05:37.604279 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.604291 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:37.604299 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:37.604372 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:37.655652 1971155 cri.go:89] found id: ""
	I0120 14:05:37.655689 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.655702 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:37.655710 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:37.655780 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:37.699626 1971155 cri.go:89] found id: ""
	I0120 14:05:37.699663 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.699677 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:37.699690 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:37.699706 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:37.761041 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:37.761105 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:37.789894 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:37.789933 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:37.870389 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:37.870424 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:37.870444 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:37.953788 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:37.953828 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:40.507832 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:40.526389 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:40.526479 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:40.564969 1971155 cri.go:89] found id: ""
	I0120 14:05:40.565007 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.565019 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:40.565028 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:40.565102 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:40.610815 1971155 cri.go:89] found id: ""
	I0120 14:05:40.610851 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.610863 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:40.610879 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:40.610950 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:40.656202 1971155 cri.go:89] found id: ""
	I0120 14:05:40.656241 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.656253 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:40.656261 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:40.656332 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:40.696520 1971155 cri.go:89] found id: ""
	I0120 14:05:40.696555 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.696567 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:40.696576 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:40.696655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:40.741177 1971155 cri.go:89] found id: ""
	I0120 14:05:40.741213 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.741224 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:40.741232 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:40.741321 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:40.787423 1971155 cri.go:89] found id: ""
	I0120 14:05:40.787463 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.787476 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:40.787486 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:40.787560 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:40.838180 1971155 cri.go:89] found id: ""
	I0120 14:05:40.838217 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.838227 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:40.838235 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:40.838308 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:40.877888 1971155 cri.go:89] found id: ""
	I0120 14:05:40.877922 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.877934 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:40.877947 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:40.877962 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:40.942664 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:40.942718 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:40.960105 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:40.960147 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:41.038583 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:41.038640 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:41.038660 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:41.125430 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:41.125499 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:43.677350 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:43.695745 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:43.695838 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:43.746662 1971155 cri.go:89] found id: ""
	I0120 14:05:43.746695 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.746710 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:43.746718 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:43.746791 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:43.802111 1971155 cri.go:89] found id: ""
	I0120 14:05:43.802142 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.802154 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:43.802163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:43.802234 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:43.849314 1971155 cri.go:89] found id: ""
	I0120 14:05:43.849351 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.849363 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:43.849371 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:43.849444 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:43.898242 1971155 cri.go:89] found id: ""
	I0120 14:05:43.898279 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.898293 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:43.898302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:43.898384 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:43.939248 1971155 cri.go:89] found id: ""
	I0120 14:05:43.939282 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.939293 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:43.939302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:43.939369 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:43.979271 1971155 cri.go:89] found id: ""
	I0120 14:05:43.979307 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.979327 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:43.979336 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:43.979408 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:44.016351 1971155 cri.go:89] found id: ""
	I0120 14:05:44.016387 1971155 logs.go:282] 0 containers: []
	W0120 14:05:44.016400 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:44.016409 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:44.016479 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:44.060965 1971155 cri.go:89] found id: ""
	I0120 14:05:44.061005 1971155 logs.go:282] 0 containers: []
	W0120 14:05:44.061017 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:44.061032 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:44.061050 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:44.076017 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:44.076070 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:44.159732 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:44.159761 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:44.159775 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:44.240721 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:44.240769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:44.285018 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:44.285061 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:46.839125 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:46.856748 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:46.856841 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:46.908851 1971155 cri.go:89] found id: ""
	I0120 14:05:46.908886 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.908898 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:46.908909 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:46.908978 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:46.949810 1971155 cri.go:89] found id: ""
	I0120 14:05:46.949865 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.949879 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:46.949887 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:46.949969 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:46.995158 1971155 cri.go:89] found id: ""
	I0120 14:05:46.995191 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.995201 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:46.995212 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:46.995284 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:47.034872 1971155 cri.go:89] found id: ""
	I0120 14:05:47.034905 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.034916 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:47.034924 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:47.034993 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:47.077500 1971155 cri.go:89] found id: ""
	I0120 14:05:47.077529 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.077537 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:47.077544 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:47.077608 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:47.118996 1971155 cri.go:89] found id: ""
	I0120 14:05:47.119027 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.119048 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:47.119059 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:47.119126 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:47.159902 1971155 cri.go:89] found id: ""
	I0120 14:05:47.159931 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.159943 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:47.159952 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:47.160027 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:47.201895 1971155 cri.go:89] found id: ""
	I0120 14:05:47.201922 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.201930 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:47.201942 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:47.201957 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:47.244852 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:47.244888 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:47.297439 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:47.297486 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:47.313519 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:47.313558 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:47.389340 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:47.389372 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:47.389391 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:49.969003 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:49.983821 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:49.983904 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:50.024496 1971155 cri.go:89] found id: ""
	I0120 14:05:50.024525 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.024536 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:50.024545 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:50.024611 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:50.066376 1971155 cri.go:89] found id: ""
	I0120 14:05:50.066408 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.066416 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:50.066423 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:50.066497 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:50.106918 1971155 cri.go:89] found id: ""
	I0120 14:05:50.107034 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.107055 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:50.107065 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:50.107154 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:50.154846 1971155 cri.go:89] found id: ""
	I0120 14:05:50.154940 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.154962 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:50.154981 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:50.155095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:50.228177 1971155 cri.go:89] found id: ""
	I0120 14:05:50.228218 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.228238 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:50.228249 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:50.228334 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:50.294106 1971155 cri.go:89] found id: ""
	I0120 14:05:50.294145 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.294158 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:50.294167 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:50.294242 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:50.340312 1971155 cri.go:89] found id: ""
	I0120 14:05:50.340357 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.340368 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:50.340375 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:50.340450 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:50.384031 1971155 cri.go:89] found id: ""
	I0120 14:05:50.384070 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.384082 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:50.384095 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:50.384112 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:50.399361 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:50.399396 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:50.484820 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:50.484851 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:50.484868 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:50.594107 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:50.594171 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:50.647700 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:50.647740 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:53.213104 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:53.229463 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:53.229538 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:53.270860 1971155 cri.go:89] found id: ""
	I0120 14:05:53.270896 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.270909 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:53.270917 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:53.270977 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:53.311721 1971155 cri.go:89] found id: ""
	I0120 14:05:53.311748 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.311757 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:53.311764 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:53.311818 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:53.350019 1971155 cri.go:89] found id: ""
	I0120 14:05:53.350053 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.350064 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:53.350073 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:53.350144 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:53.386955 1971155 cri.go:89] found id: ""
	I0120 14:05:53.386982 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.386990 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:53.386996 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:53.387059 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:53.427056 1971155 cri.go:89] found id: ""
	I0120 14:05:53.427096 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.427105 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:53.427112 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:53.427170 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:53.468506 1971155 cri.go:89] found id: ""
	I0120 14:05:53.468546 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.468559 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:53.468568 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:53.468642 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:53.505884 1971155 cri.go:89] found id: ""
	I0120 14:05:53.505926 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.505938 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:53.505948 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:53.506024 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:53.547189 1971155 cri.go:89] found id: ""
	I0120 14:05:53.547232 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.547244 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:53.547258 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:53.547282 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:53.629525 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:53.629559 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:53.629577 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:53.711943 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:53.711994 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:53.761408 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:53.761442 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:53.815735 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:53.815781 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:56.332189 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:56.347525 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:56.347622 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:56.389104 1971155 cri.go:89] found id: ""
	I0120 14:05:56.389145 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.389156 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:56.389165 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:56.389244 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:56.427108 1971155 cri.go:89] found id: ""
	I0120 14:05:56.427151 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.427163 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:56.427173 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:56.427252 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:56.473424 1971155 cri.go:89] found id: ""
	I0120 14:05:56.473457 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.473469 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:56.473477 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:56.473560 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:56.513450 1971155 cri.go:89] found id: ""
	I0120 14:05:56.513485 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.513495 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:56.513504 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:56.513578 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:56.562482 1971155 cri.go:89] found id: ""
	I0120 14:05:56.562533 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.562546 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:56.562554 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:56.562652 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:56.604745 1971155 cri.go:89] found id: ""
	I0120 14:05:56.604776 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.604787 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:56.604795 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:56.604867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:56.645202 1971155 cri.go:89] found id: ""
	I0120 14:05:56.645245 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.645259 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:56.645268 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:56.645343 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:56.686351 1971155 cri.go:89] found id: ""
	I0120 14:05:56.686379 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.686388 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:56.686405 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:56.686419 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:56.700157 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:56.700206 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:56.780260 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:56.780289 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:56.780306 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:56.859551 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:56.859590 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:56.900940 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:56.900970 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:59.457051 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:59.472587 1971155 kubeadm.go:597] duration metric: took 4m3.227513478s to restartPrimaryControlPlane
	W0120 14:05:59.472685 1971155 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:05:59.472723 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:06:01.310474 1971155 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.837720995s)
	I0120 14:06:01.310572 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:06:01.327408 1971155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:06:01.339235 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:06:01.350183 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:06:01.350209 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:06:01.350259 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:06:01.361183 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:06:01.361270 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:06:01.372352 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:06:01.382976 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:06:01.383040 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:06:01.394492 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:06:01.405628 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:06:01.405694 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:06:01.417040 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:06:01.428807 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:06:01.428872 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:06:01.441345 1971155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:06:01.698918 1971155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:07:57.893064 1971155 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 14:07:57.893206 1971155 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 14:07:57.895047 1971155 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 14:07:57.895110 1971155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:57.895204 1971155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:57.895358 1971155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:57.895455 1971155 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 14:07:57.895510 1971155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:57.897667 1971155 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:57.897769 1971155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:57.897859 1971155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:57.897979 1971155 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:57.898089 1971155 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:57.898184 1971155 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:57.898261 1971155 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:57.898370 1971155 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:57.898473 1971155 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:57.898549 1971155 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:57.898650 1971155 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:57.898706 1971155 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:57.898808 1971155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:57.898866 1971155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:57.898917 1971155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:57.898971 1971155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:57.899018 1971155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:57.899156 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:57.899270 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:57.899322 1971155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:57.899385 1971155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:57.900907 1971155 out.go:235]   - Booting up control plane ...
	I0120 14:07:57.901012 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:57.901098 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:57.901183 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:57.901301 1971155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:57.901498 1971155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 14:07:57.901549 1971155 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 14:07:57.901614 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.901802 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.901862 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902008 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902071 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902248 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902332 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902476 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902532 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902723 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902740 1971155 kubeadm.go:310] 
	I0120 14:07:57.902798 1971155 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 14:07:57.902913 1971155 kubeadm.go:310] 		timed out waiting for the condition
	I0120 14:07:57.902942 1971155 kubeadm.go:310] 
	I0120 14:07:57.902990 1971155 kubeadm.go:310] 	This error is likely caused by:
	I0120 14:07:57.903050 1971155 kubeadm.go:310] 		- The kubelet is not running
	I0120 14:07:57.903175 1971155 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 14:07:57.903185 1971155 kubeadm.go:310] 
	I0120 14:07:57.903309 1971155 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 14:07:57.903358 1971155 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 14:07:57.903406 1971155 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 14:07:57.903415 1971155 kubeadm.go:310] 
	I0120 14:07:57.903535 1971155 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 14:07:57.903608 1971155 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 14:07:57.903614 1971155 kubeadm.go:310] 
	I0120 14:07:57.903742 1971155 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 14:07:57.903828 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 14:07:57.903894 1971155 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 14:07:57.903959 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 14:07:57.903970 1971155 kubeadm.go:310] 
	W0120 14:07:57.904147 1971155 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 14:07:57.904205 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:07:58.379343 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:07:58.394094 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:07:58.405184 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:07:58.405214 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:07:58.405275 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:07:58.415126 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:07:58.415190 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:07:58.425525 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:07:58.435286 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:07:58.435402 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:07:58.445346 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:07:58.455338 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:07:58.455400 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:07:58.465346 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:07:58.474739 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:07:58.474821 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:07:58.484664 1971155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:07:58.559434 1971155 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 14:07:58.559546 1971155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:58.713832 1971155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:58.713978 1971155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:58.714110 1971155 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 14:07:58.902142 1971155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:58.904151 1971155 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:58.904252 1971155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:58.904340 1971155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:58.904451 1971155 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:58.904532 1971155 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:58.904662 1971155 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:58.904752 1971155 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:58.904850 1971155 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:58.904938 1971155 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:58.905078 1971155 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:58.905203 1971155 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:58.905255 1971155 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:58.905311 1971155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:59.059284 1971155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:59.367307 1971155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:59.478773 1971155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:59.769599 1971155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:59.795017 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:59.796387 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:59.796440 1971155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:59.967182 1971155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:59.969049 1971155 out.go:235]   - Booting up control plane ...
	I0120 14:07:59.969210 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:59.969498 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:59.978995 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:59.980298 1971155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:59.983629 1971155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 14:08:39.986873 1971155 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 14:08:39.986972 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:39.987222 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:08:44.987592 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:44.987868 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:08:54.988530 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:54.988725 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:14.990244 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:09:14.990492 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:54.990993 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:09:54.991340 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:54.991370 1971155 kubeadm.go:310] 
	I0120 14:09:54.991419 1971155 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 14:09:54.991474 1971155 kubeadm.go:310] 		timed out waiting for the condition
	I0120 14:09:54.991485 1971155 kubeadm.go:310] 
	I0120 14:09:54.991536 1971155 kubeadm.go:310] 	This error is likely caused by:
	I0120 14:09:54.991582 1971155 kubeadm.go:310] 		- The kubelet is not running
	I0120 14:09:54.991734 1971155 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 14:09:54.991760 1971155 kubeadm.go:310] 
	I0120 14:09:54.991930 1971155 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 14:09:54.991981 1971155 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 14:09:54.992034 1971155 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 14:09:54.992065 1971155 kubeadm.go:310] 
	I0120 14:09:54.992234 1971155 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 14:09:54.992326 1971155 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 14:09:54.992342 1971155 kubeadm.go:310] 
	I0120 14:09:54.992508 1971155 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 14:09:54.992650 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 14:09:54.992786 1971155 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 14:09:54.992894 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 14:09:54.992904 1971155 kubeadm.go:310] 
	I0120 14:09:54.994025 1971155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:09:54.994123 1971155 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 14:09:54.994214 1971155 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 14:09:54.994325 1971155 kubeadm.go:394] duration metric: took 7m58.806679255s to StartCluster
	I0120 14:09:54.994398 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:09:54.994475 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:09:55.044299 1971155 cri.go:89] found id: ""
	I0120 14:09:55.044338 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.044350 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:09:55.044359 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:09:55.044434 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:09:55.088726 1971155 cri.go:89] found id: ""
	I0120 14:09:55.088759 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.088767 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:09:55.088774 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:09:55.088848 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:09:55.127484 1971155 cri.go:89] found id: ""
	I0120 14:09:55.127513 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.127523 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:09:55.127531 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:09:55.127602 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:09:55.167042 1971155 cri.go:89] found id: ""
	I0120 14:09:55.167079 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.167091 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:09:55.167100 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:09:55.167173 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:09:55.206075 1971155 cri.go:89] found id: ""
	I0120 14:09:55.206111 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.206122 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:09:55.206128 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:09:55.206184 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:09:55.262849 1971155 cri.go:89] found id: ""
	I0120 14:09:55.262895 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.262907 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:09:55.262917 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:09:55.262989 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:09:55.303064 1971155 cri.go:89] found id: ""
	I0120 14:09:55.303102 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.303114 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:09:55.303122 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:09:55.303190 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:09:55.339202 1971155 cri.go:89] found id: ""
	I0120 14:09:55.339237 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.339248 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:09:55.339262 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:09:55.339279 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:09:55.425991 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:09:55.426022 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:09:55.426042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:09:55.529413 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:09:55.529454 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:09:55.574927 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:09:55.574965 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:09:55.631464 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:09:55.631507 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0120 14:09:55.647055 1971155 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 14:09:55.647121 1971155 out.go:270] * 
	W0120 14:09:55.647197 1971155 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 14:09:55.647230 1971155 out.go:270] * 
	W0120 14:09:55.648431 1971155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 14:09:55.652120 1971155 out.go:201] 
	W0120 14:09:55.653811 1971155 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 14:09:55.653880 1971155 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 14:09:55.653909 1971155 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 14:09:55.655598 1971155 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-191446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
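The repeated kubelet-check failures above (connection refused on 127.0.0.1:10248) mean the kubelet never became healthy, which is what the K8S_KUBELET_NOT_RUNNING exit reason reports. A minimal follow-up sketch, using only the commands the log itself suggests; the profile name and flags are taken from the failing invocation above, and this is a starting point for triage rather than a verified fix:

	# inspect kubelet and container state on the node
	out/minikube-linux-amd64 -p old-k8s-version-191446 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-191446 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-191446 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# retry with the cgroup driver the error message suggests
	out/minikube-linux-amd64 start -p old-k8s-version-191446 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd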
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 2 (254.065941ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-191446 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-191446 logs -n 25: (1.270505286s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:56 UTC | 20 Jan 25 13:57 UTC |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-038404                              | cert-expiration-038404       | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	| start   | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-648067             | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-955986 | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	|         | disable-driver-mounts-955986                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:59 UTC |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-647109            | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 14:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-648067                  | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 13:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-191446        | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-727256  | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 13:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 14:01 UTC |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-647109                 | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC | 20 Jan 25 14:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-191446             | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-727256       | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 14:01:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 14:01:30.648649 1971324 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:01:30.648768 1971324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:30.648777 1971324 out.go:358] Setting ErrFile to fd 2...
	I0120 14:01:30.648781 1971324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:30.648971 1971324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 14:01:30.649563 1971324 out.go:352] Setting JSON to false
	I0120 14:01:30.650677 1971324 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20637,"bootTime":1737361054,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:01:30.650808 1971324 start.go:139] virtualization: kvm guest
	I0120 14:01:30.653087 1971324 out.go:177] * [default-k8s-diff-port-727256] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:01:30.654902 1971324 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:01:30.654958 1971324 notify.go:220] Checking for updates...
	I0120 14:01:30.657200 1971324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:01:30.658358 1971324 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:01:30.659540 1971324 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:01:30.660755 1971324 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:01:30.662124 1971324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:01:30.664066 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:01:30.664694 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:30.664783 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:30.683363 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0120 14:01:30.684660 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:30.685421 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:30.685453 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:30.685849 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:30.686136 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:30.686482 1971324 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:01:30.686962 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:30.687017 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:30.705214 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0120 14:01:30.705778 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:30.706464 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:30.706496 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:30.706910 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:30.707413 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:30.748140 1971324 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 14:01:30.749542 1971324 start.go:297] selected driver: kvm2
	I0120 14:01:30.749575 1971324 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:30.749732 1971324 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:01:30.750471 1971324 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:30.750569 1971324 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:01:30.769419 1971324 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:01:30.769920 1971324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:01:30.769962 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:01:30.770026 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:01:30.770087 1971324 start.go:340] cluster config:
	{Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:30.770203 1971324 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:30.772094 1971324 out.go:177] * Starting "default-k8s-diff-port-727256" primary control-plane node in "default-k8s-diff-port-727256" cluster
	I0120 14:01:27.567956 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .Start
	I0120 14:01:27.568241 1971155 main.go:141] libmachine: (old-k8s-version-191446) starting domain...
	I0120 14:01:27.568273 1971155 main.go:141] libmachine: (old-k8s-version-191446) ensuring networks are active...
	I0120 14:01:27.569283 1971155 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network default is active
	I0120 14:01:27.569742 1971155 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network mk-old-k8s-version-191446 is active
	I0120 14:01:27.570107 1971155 main.go:141] libmachine: (old-k8s-version-191446) getting domain XML...
	I0120 14:01:27.570794 1971155 main.go:141] libmachine: (old-k8s-version-191446) creating domain...
	I0120 14:01:28.844259 1971155 main.go:141] libmachine: (old-k8s-version-191446) waiting for IP...
	I0120 14:01:28.845169 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:28.845736 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:28.845869 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:28.845749 1971190 retry.go:31] will retry after 249.093991ms: waiting for domain to come up
	I0120 14:01:29.096266 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.096835 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.096870 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.096778 1971190 retry.go:31] will retry after 285.937419ms: waiting for domain to come up
	I0120 14:01:29.384654 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.385227 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.385260 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.385184 1971190 retry.go:31] will retry after 403.444594ms: waiting for domain to come up
	I0120 14:01:29.789819 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.790466 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.790516 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.790442 1971190 retry.go:31] will retry after 525.904837ms: waiting for domain to come up
	I0120 14:01:30.361342 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:30.361758 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:30.361799 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:30.361742 1971190 retry.go:31] will retry after 498.844656ms: waiting for domain to come up
	I0120 14:01:30.862672 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:30.863328 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:30.863359 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:30.863284 1971190 retry.go:31] will retry after 695.176765ms: waiting for domain to come up
	I0120 14:01:31.559994 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:31.560418 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:31.560483 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:31.560423 1971190 retry.go:31] will retry after 1.138767233s: waiting for domain to come up
	I0120 14:01:29.280435 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:31.281034 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:33.778046 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:32.686925 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:35.185223 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:30.773441 1971324 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:01:30.773503 1971324 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 14:01:30.773514 1971324 cache.go:56] Caching tarball of preloaded images
	I0120 14:01:30.773638 1971324 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 14:01:30.773650 1971324 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 14:01:30.773755 1971324 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/config.json ...
	I0120 14:01:30.774002 1971324 start.go:360] acquireMachinesLock for default-k8s-diff-port-727256: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:01:32.700822 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:32.701293 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:32.701323 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:32.701238 1971190 retry.go:31] will retry after 1.039348308s: waiting for domain to come up
	I0120 14:01:33.742152 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:33.742798 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:33.742827 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:33.742756 1971190 retry.go:31] will retry after 1.487881975s: waiting for domain to come up
	I0120 14:01:35.232385 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:35.232903 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:35.233000 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:35.232883 1971190 retry.go:31] will retry after 1.541170209s: waiting for domain to come up
	I0120 14:01:36.775877 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:36.776558 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:36.776586 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:36.776513 1971190 retry.go:31] will retry after 2.896053576s: waiting for domain to come up
	I0120 14:01:35.778385 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:37.778939 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:37.187266 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:39.686105 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:39.675363 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:39.675986 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:39.676021 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:39.675945 1971190 retry.go:31] will retry after 3.105341623s: waiting for domain to come up
	I0120 14:01:39.779284 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.278570 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.185136 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:44.686564 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.783450 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:42.783953 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:42.783979 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:42.783919 1971190 retry.go:31] will retry after 3.216558184s: waiting for domain to come up
	I0120 14:01:46.001813 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.002358 1971155 main.go:141] libmachine: (old-k8s-version-191446) found domain IP: 192.168.61.215
	I0120 14:01:46.002386 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has current primary IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.002392 1971155 main.go:141] libmachine: (old-k8s-version-191446) reserving static IP address...
	I0120 14:01:46.002890 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "old-k8s-version-191446", mac: "52:54:00:87:83:fb", ip: "192.168.61.215"} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.002913 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | skip adding static IP to network mk-old-k8s-version-191446 - found existing host DHCP lease matching {name: "old-k8s-version-191446", mac: "52:54:00:87:83:fb", ip: "192.168.61.215"}
	I0120 14:01:46.002961 1971155 main.go:141] libmachine: (old-k8s-version-191446) reserved static IP address 192.168.61.215 for domain old-k8s-version-191446
	I0120 14:01:46.003012 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Getting to WaitForSSH function...
	I0120 14:01:46.003029 1971155 main.go:141] libmachine: (old-k8s-version-191446) waiting for SSH...
	I0120 14:01:46.005479 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.005822 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.005844 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.005930 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH client type: external
	I0120 14:01:46.005974 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa (-rw-------)
	I0120 14:01:46.006012 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:01:46.006030 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | About to run SSH command:
	I0120 14:01:46.006042 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | exit 0
	I0120 14:01:46.134861 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | SSH cmd err, output: <nil>: 
	I0120 14:01:46.135287 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetConfigRaw
	I0120 14:01:46.135993 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:46.138498 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.138913 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.138949 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.139408 1971155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/config.json ...
	I0120 14:01:46.139628 1971155 machine.go:93] provisionDockerMachine start ...
	I0120 14:01:46.139648 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:46.139910 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.142776 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.143168 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.143196 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.143377 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.143551 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.143710 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.143884 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.144084 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.144287 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.144299 1971155 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:01:46.259874 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:01:46.259909 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.260184 1971155 buildroot.go:166] provisioning hostname "old-k8s-version-191446"
	I0120 14:01:46.260218 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.260442 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.263109 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.263469 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.263498 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.263608 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.263809 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.263964 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.264115 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.264263 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.264566 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.264598 1971155 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-191446 && echo "old-k8s-version-191446" | sudo tee /etc/hostname
	I0120 14:01:46.390733 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-191446
	
	I0120 14:01:46.390778 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.394086 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.394452 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.394495 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.394665 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.394902 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.395120 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.395312 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.395484 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.395721 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.395742 1971155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-191446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-191446/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-191446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:01:46.517398 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:01:46.517429 1971155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:01:46.517474 1971155 buildroot.go:174] setting up certificates
	I0120 14:01:46.517489 1971155 provision.go:84] configureAuth start
	I0120 14:01:46.517501 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.517852 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:46.520852 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.521243 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.521276 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.521419 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.523721 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.524173 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.524216 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.524323 1971155 provision.go:143] copyHostCerts
	I0120 14:01:46.524385 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:01:46.524406 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:01:46.524505 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:01:46.524641 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:01:46.524653 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:01:46.524681 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:01:46.524749 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:01:46.524756 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:01:46.524777 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:01:46.524823 1971155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-191446 san=[127.0.0.1 192.168.61.215 localhost minikube old-k8s-version-191446]
	I0120 14:01:46.780575 1971155 provision.go:177] copyRemoteCerts
	I0120 14:01:46.780653 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:01:46.780692 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.783791 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.784174 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.784204 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.784390 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.784667 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.784947 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.785129 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:46.873537 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:01:46.906323 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 14:01:46.934595 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 14:01:46.963136 1971155 provision.go:87] duration metric: took 445.630599ms to configureAuth
	I0120 14:01:46.963175 1971155 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:01:46.963391 1971155 config.go:182] Loaded profile config "old-k8s-version-191446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 14:01:46.963495 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.966539 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.966917 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.966953 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.967102 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.967316 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.967488 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.967694 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.967860 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.968110 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.968140 1971155 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:01:47.221729 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:01:47.221758 1971155 machine.go:96] duration metric: took 1.082115997s to provisionDockerMachine
	I0120 14:01:47.221770 1971155 start.go:293] postStartSetup for "old-k8s-version-191446" (driver="kvm2")
	I0120 14:01:47.221780 1971155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:01:47.221801 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.222156 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:01:47.222213 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.225564 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.226024 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.226063 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.226226 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.226479 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.226678 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.226841 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.315044 1971155 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:01:47.319600 1971155 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:01:47.319630 1971155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:01:47.319699 1971155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:01:47.319785 1971155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:01:47.319880 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:01:47.331251 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:01:47.359102 1971155 start.go:296] duration metric: took 137.311216ms for postStartSetup
	I0120 14:01:47.359156 1971155 fix.go:56] duration metric: took 19.814283548s for fixHost
	I0120 14:01:47.359184 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.362176 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.362643 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.362680 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.362916 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.363161 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.363352 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.363520 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.363693 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:47.363932 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:47.363948 1971155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:01:47.480212 1971324 start.go:364] duration metric: took 16.706172443s to acquireMachinesLock for "default-k8s-diff-port-727256"
	I0120 14:01:47.480300 1971324 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:01:47.480313 1971324 fix.go:54] fixHost starting: 
	I0120 14:01:47.480706 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:47.480762 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:47.499438 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0120 14:01:47.499966 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:47.500523 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:47.500551 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:47.501028 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:47.501254 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:47.501413 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:01:47.503562 1971324 fix.go:112] recreateIfNeeded on default-k8s-diff-port-727256: state=Stopped err=<nil>
	I0120 14:01:47.503596 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	W0120 14:01:47.503774 1971324 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:01:47.505539 1971324 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-727256" ...
	I0120 14:01:44.778211 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.279184 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.480011 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381707.434903722
	
	I0120 14:01:47.480050 1971155 fix.go:216] guest clock: 1737381707.434903722
	I0120 14:01:47.480061 1971155 fix.go:229] Guest: 2025-01-20 14:01:47.434903722 +0000 UTC Remote: 2025-01-20 14:01:47.359160605 +0000 UTC m=+19.980745135 (delta=75.743117ms)
	I0120 14:01:47.480090 1971155 fix.go:200] guest clock delta is within tolerance: 75.743117ms
	I0120 14:01:47.480098 1971155 start.go:83] releasing machines lock for "old-k8s-version-191446", held for 19.935238773s
	I0120 14:01:47.480132 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.480450 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:47.483367 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.483792 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.483828 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.483945 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484435 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484606 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484699 1971155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:01:47.484761 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.484899 1971155 ssh_runner.go:195] Run: cat /version.json
	I0120 14:01:47.484929 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.487568 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.487980 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.488011 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488093 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488211 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.488434 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.488591 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.488630 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.488653 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488741 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.488862 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.489009 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.489153 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.489343 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.608326 1971155 ssh_runner.go:195] Run: systemctl --version
	I0120 14:01:47.614709 1971155 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:01:47.772139 1971155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:01:47.780427 1971155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:01:47.780502 1971155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:01:47.798266 1971155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:01:47.798304 1971155 start.go:495] detecting cgroup driver to use...
	I0120 14:01:47.798398 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:01:47.815867 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:01:47.835855 1971155 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:01:47.835918 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:01:47.853481 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:01:47.869379 1971155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:01:47.988401 1971155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:01:48.193315 1971155 docker.go:233] disabling docker service ...
	I0120 14:01:48.193390 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:01:48.214201 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:01:48.230964 1971155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:01:48.377733 1971155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:01:48.516198 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:01:48.533486 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:01:48.557115 1971155 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 14:01:48.557197 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.570080 1971155 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:01:48.570162 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.584225 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.596995 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.609663 1971155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:01:48.623942 1971155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:01:48.637099 1971155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:01:48.637171 1971155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:01:48.653873 1971155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:01:48.666171 1971155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:01:48.807308 1971155 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:01:48.914634 1971155 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:01:48.914731 1971155 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:01:48.920471 1971155 start.go:563] Will wait 60s for crictl version
	I0120 14:01:48.920558 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:48.924644 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:01:48.966008 1971155 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:01:48.966111 1971155 ssh_runner.go:195] Run: crio --version
	I0120 14:01:48.995639 1971155 ssh_runner.go:195] Run: crio --version
	I0120 14:01:49.031088 1971155 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 14:01:47.185914 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:49.187141 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.506801 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Start
	I0120 14:01:47.507007 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) starting domain...
	I0120 14:01:47.507037 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) ensuring networks are active...
	I0120 14:01:47.507737 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Ensuring network default is active
	I0120 14:01:47.508168 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Ensuring network mk-default-k8s-diff-port-727256 is active
	I0120 14:01:47.508707 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) getting domain XML...
	I0120 14:01:47.509515 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) creating domain...
	I0120 14:01:48.889668 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) waiting for IP...
	I0120 14:01:48.890857 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:48.891526 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:48.891694 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:48.891527 1971420 retry.go:31] will retry after 199.178216ms: waiting for domain to come up
	I0120 14:01:49.092132 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.092672 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.092706 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.092636 1971420 retry.go:31] will retry after 255.633273ms: waiting for domain to come up
	I0120 14:01:49.350430 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.351160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.351194 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.351128 1971420 retry.go:31] will retry after 428.048868ms: waiting for domain to come up
	I0120 14:01:49.781110 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.781882 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.781964 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.781864 1971420 retry.go:31] will retry after 580.304151ms: waiting for domain to come up
	I0120 14:01:50.363965 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.364533 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.364559 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:50.364529 1971420 retry.go:31] will retry after 531.332191ms: waiting for domain to come up
	I0120 14:01:49.032269 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:49.035945 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:49.036382 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:49.036423 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:49.036733 1971155 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0120 14:01:49.041470 1971155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:01:49.055442 1971155 kubeadm.go:883] updating cluster {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:01:49.055654 1971155 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 14:01:49.055738 1971155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:01:49.111537 1971155 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 14:01:49.111603 1971155 ssh_runner.go:195] Run: which lz4
	I0120 14:01:49.116646 1971155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:01:49.121632 1971155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:01:49.121670 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 14:01:51.019564 1971155 crio.go:462] duration metric: took 1.902969728s to copy over tarball
	I0120 14:01:51.019668 1971155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 14:01:49.280435 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:51.780700 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:51.189623 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:53.687386 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:50.897267 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.897845 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.897880 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:50.897808 1971420 retry.go:31] will retry after 772.118387ms: waiting for domain to come up
	I0120 14:01:51.671806 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:51.672432 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:51.672466 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:51.672381 1971420 retry.go:31] will retry after 1.060623833s: waiting for domain to come up
	I0120 14:01:52.735398 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:52.735986 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:52.736018 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:52.735943 1971420 retry.go:31] will retry after 1.002731806s: waiting for domain to come up
	I0120 14:01:53.740048 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:53.740702 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:53.740731 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:53.740659 1971420 retry.go:31] will retry after 1.680491712s: waiting for domain to come up
	I0120 14:01:55.423577 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:55.424100 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:55.424135 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:55.424031 1971420 retry.go:31] will retry after 1.794880075s: waiting for domain to come up
	I0120 14:01:54.192207 1971155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.172482213s)
	I0120 14:01:54.192247 1971155 crio.go:469] duration metric: took 3.172642787s to extract the tarball
	I0120 14:01:54.192257 1971155 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 14:01:54.241548 1971155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:01:54.283118 1971155 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 14:01:54.283147 1971155 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 14:01:54.283222 1971155 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.283246 1971155 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.283292 1971155 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.283311 1971155 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.283369 1971155 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 14:01:54.283369 1971155 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.283370 1971155 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.283429 1971155 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.285174 1971155 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.285194 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.285222 1971155 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.285232 1971155 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.285484 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.285533 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.285551 1971155 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 14:01:54.285520 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.443508 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.451962 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.459320 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.478139 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.482365 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.490130 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.491742 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 14:01:54.535842 1971155 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 14:01:54.535930 1971155 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.536008 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.556510 1971155 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 14:01:54.556563 1971155 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.556617 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.604701 1971155 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 14:01:54.604747 1971155 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.604801 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648817 1971155 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 14:01:54.648847 1971155 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 14:01:54.648872 1971155 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.648887 1971155 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.648932 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648951 1971155 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 14:01:54.648986 1971155 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.649059 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648932 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.662210 1971155 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 14:01:54.662265 1971155 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 14:01:54.662271 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.662303 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.662304 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.662392 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.662403 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.666373 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.666427 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.784739 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.815550 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.815585 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:54.815637 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.815650 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.820367 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.820421 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.820459 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:55.000111 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:55.000218 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:55.013244 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:55.013276 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:55.013348 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:55.013372 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:55.015126 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:55.144073 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 14:01:55.144169 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 14:01:55.175966 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:55.175984 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 14:01:55.179810 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 14:01:55.179835 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 14:01:55.180076 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 14:01:55.216565 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 14:01:55.216646 1971155 cache_images.go:92] duration metric: took 933.479899ms to LoadCachedImages
	W0120 14:01:55.216768 1971155 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0120 14:01:55.216789 1971155 kubeadm.go:934] updating node { 192.168.61.215 8443 v1.20.0 crio true true} ...
	I0120 14:01:55.216907 1971155 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-191446 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:01:55.216973 1971155 ssh_runner.go:195] Run: crio config
	I0120 14:01:55.272348 1971155 cni.go:84] Creating CNI manager for ""
	I0120 14:01:55.272377 1971155 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:01:55.272387 1971155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:01:55.272407 1971155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.215 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-191446 NodeName:old-k8s-version-191446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 14:01:55.272581 1971155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-191446"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:01:55.272661 1971155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 14:01:55.285452 1971155 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:01:55.285532 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:01:55.300604 1971155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 14:01:55.321434 1971155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:01:55.339855 1971155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 14:01:55.360605 1971155 ssh_runner.go:195] Run: grep 192.168.61.215	control-plane.minikube.internal$ /etc/hosts
	I0120 14:01:55.364977 1971155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:01:55.380053 1971155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:01:55.499744 1971155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:01:55.518232 1971155 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446 for IP: 192.168.61.215
	I0120 14:01:55.518267 1971155 certs.go:194] generating shared ca certs ...
	I0120 14:01:55.518300 1971155 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:01:55.518512 1971155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:01:55.518553 1971155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:01:55.518563 1971155 certs.go:256] generating profile certs ...
	I0120 14:01:55.571153 1971155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.key
	I0120 14:01:55.571288 1971155 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key.d5f4b946
	I0120 14:01:55.571350 1971155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key
	I0120 14:01:55.571517 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:01:55.571559 1971155 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:01:55.571570 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:01:55.571606 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:01:55.571641 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:01:55.571671 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:01:55.571733 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:01:55.572624 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:01:55.613349 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:01:55.645837 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:01:55.688637 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:01:55.736949 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 14:01:55.786459 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 14:01:55.833912 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:01:55.861615 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:01:55.891303 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:01:55.920272 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:01:55.947553 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:01:55.979159 1971155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:01:56.002476 1971155 ssh_runner.go:195] Run: openssl version
	I0120 14:01:56.011075 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:01:56.026823 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.033320 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.033404 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.041787 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:01:56.055968 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:01:56.072918 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.078642 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.078744 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.085416 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:01:56.101948 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:01:56.117742 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.123020 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.123086 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.129661 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 14:01:56.142113 1971155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:01:56.147841 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:01:56.154627 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:01:56.161139 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:01:56.167754 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:01:56.174520 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:01:56.181204 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 14:01:56.187656 1971155 kubeadm.go:392] StartCluster: {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:56.187767 1971155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:01:56.187860 1971155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:01:56.233626 1971155 cri.go:89] found id: ""
	I0120 14:01:56.233718 1971155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:01:56.245027 1971155 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:01:56.245062 1971155 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:01:56.245126 1971155 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:01:56.258403 1971155 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:01:56.259211 1971155 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-191446" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:01:56.259525 1971155 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-191446" cluster setting kubeconfig missing "old-k8s-version-191446" context setting]
	I0120 14:01:56.260060 1971155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:01:56.288258 1971155 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:01:56.302812 1971155 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.215
	I0120 14:01:56.302855 1971155 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:01:56.302872 1971155 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 14:01:56.302942 1971155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:01:56.343694 1971155 cri.go:89] found id: ""
	I0120 14:01:56.343794 1971155 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:01:56.364228 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:01:56.375163 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:01:56.375187 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:01:56.375260 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:01:56.386527 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:01:56.386622 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:01:56.398715 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:01:56.410031 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:01:56.410112 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:01:56.420983 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:01:56.433109 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:01:56.433192 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:01:56.447385 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:01:56.460977 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:01:56.461066 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:01:56.472124 1971155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:01:56.484344 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:56.617563 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.344622 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:54.280536 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:56.779010 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:58.779726 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:55.714950 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:58.186438 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:57.220139 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:57.220694 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:57.220723 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:57.220656 1971420 retry.go:31] will retry after 2.261913004s: waiting for domain to come up
	I0120 14:01:59.484214 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:59.484791 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:59.484820 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:59.484718 1971420 retry.go:31] will retry after 2.630282337s: waiting for domain to come up
	I0120 14:01:57.621080 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.732306 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.856823 1971155 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:01:57.856931 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:58.357005 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:58.857625 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:59.358085 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:59.857398 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:00.357930 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:00.857134 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.357106 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.857163 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:02.357462 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.278692 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:03.777558 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:00.689940 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:03.185114 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:02.116624 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:02.117129 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:02:02.117163 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:02:02.117089 1971420 retry.go:31] will retry after 3.120909651s: waiting for domain to come up
	I0120 14:02:05.239389 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:05.239901 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:02:05.239953 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:02:05.239877 1971420 retry.go:31] will retry after 4.391800801s: waiting for domain to come up
	I0120 14:02:02.857734 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:03.357569 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:03.857955 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:04.357274 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:04.857819 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.357138 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.857025 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:06.357050 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:06.857399 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:07.357029 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.777988 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:08.278483 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:05.188225 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:07.685349 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:10.186075 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:09.634193 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.634637 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has current primary IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.634659 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) found domain IP: 192.168.72.104
	I0120 14:02:09.634684 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) reserving static IP address...
	I0120 14:02:09.635059 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) reserved static IP address 192.168.72.104 for domain default-k8s-diff-port-727256
	I0120 14:02:09.635098 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-727256", mac: "52:54:00:59:90:f7", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.635109 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) waiting for SSH...
	I0120 14:02:09.635133 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | skip adding static IP to network mk-default-k8s-diff-port-727256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-727256", mac: "52:54:00:59:90:f7", ip: "192.168.72.104"}
	I0120 14:02:09.635148 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Getting to WaitForSSH function...
	I0120 14:02:09.637199 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.637520 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.637554 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.637664 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Using SSH client type: external
	I0120 14:02:09.637695 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa (-rw-------)
	I0120 14:02:09.637761 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:02:09.637785 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | About to run SSH command:
	I0120 14:02:09.637834 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | exit 0
	I0120 14:02:09.763002 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | SSH cmd err, output: <nil>: 
	I0120 14:02:09.763410 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetConfigRaw
	I0120 14:02:09.764140 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:09.766862 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.767255 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.767309 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.767547 1971324 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/config.json ...
	I0120 14:02:09.767747 1971324 machine.go:93] provisionDockerMachine start ...
	I0120 14:02:09.767768 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:09.768084 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:09.770642 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.770978 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.771008 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.771159 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:09.771355 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.771522 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.771651 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:09.771843 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:09.772116 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:09.772135 1971324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:02:09.887277 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:02:09.887306 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:09.887607 1971324 buildroot.go:166] provisioning hostname "default-k8s-diff-port-727256"
	I0120 14:02:09.887644 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:09.887855 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:09.890533 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.890940 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.890972 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.891158 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:09.891363 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.891514 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.891625 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:09.891766 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:09.891982 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:09.891996 1971324 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-727256 && echo "default-k8s-diff-port-727256" | sudo tee /etc/hostname
	I0120 14:02:10.015326 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-727256
	
	I0120 14:02:10.015358 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.018488 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.018889 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.018920 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.019174 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.019397 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.019591 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.019775 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.019935 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:10.020121 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:10.020141 1971324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-727256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-727256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-727256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:02:10.136552 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:02:10.136593 1971324 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:02:10.136631 1971324 buildroot.go:174] setting up certificates
	I0120 14:02:10.136653 1971324 provision.go:84] configureAuth start
	I0120 14:02:10.136667 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:10.137020 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:10.140046 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.140577 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.140627 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.140766 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.143806 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.144185 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.144220 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.144340 1971324 provision.go:143] copyHostCerts
	I0120 14:02:10.144408 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:02:10.144433 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:02:10.144518 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:02:10.144663 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:02:10.144675 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:02:10.144716 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:02:10.144827 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:02:10.144838 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:02:10.144865 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:02:10.144958 1971324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-727256 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-727256 localhost minikube]
	I0120 14:02:07.857904 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:08.357419 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:08.857241 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:09.357914 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:09.857010 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.357118 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.857037 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:11.357243 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:11.857017 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:12.357401 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.704568 1971324 provision.go:177] copyRemoteCerts
	I0120 14:02:10.704642 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:02:10.704670 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.707581 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.707968 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.708005 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.708165 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.708406 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.708556 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.708705 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:10.798392 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:02:10.825489 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0120 14:02:10.851203 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 14:02:10.877144 1971324 provision.go:87] duration metric: took 740.469356ms to configureAuth
	I0120 14:02:10.877184 1971324 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:02:10.877372 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:02:10.877454 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.880681 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.881100 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.881135 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.881295 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.881487 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.881661 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.881824 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.881986 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:10.882152 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:10.882168 1971324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:02:11.118214 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:02:11.118246 1971324 machine.go:96] duration metric: took 1.350483814s to provisionDockerMachine
	I0120 14:02:11.118262 1971324 start.go:293] postStartSetup for "default-k8s-diff-port-727256" (driver="kvm2")
	I0120 14:02:11.118274 1971324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:02:11.118291 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.118662 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:02:11.118706 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.121765 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.122132 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.122160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.122325 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.122539 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.122849 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.123019 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.205783 1971324 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:02:11.211240 1971324 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:02:11.211282 1971324 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:02:11.211389 1971324 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:02:11.211524 1971324 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:02:11.211679 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:02:11.222226 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:02:11.248964 1971324 start.go:296] duration metric: took 130.683064ms for postStartSetup
	I0120 14:02:11.249013 1971324 fix.go:56] duration metric: took 23.768701383s for fixHost
	I0120 14:02:11.249043 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.252350 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.252735 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.252784 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.253016 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.253244 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.253451 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.253587 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.253769 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:11.254003 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:11.254018 1971324 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:02:11.360027 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381731.321642168
	
	I0120 14:02:11.360058 1971324 fix.go:216] guest clock: 1737381731.321642168
	I0120 14:02:11.360067 1971324 fix.go:229] Guest: 2025-01-20 14:02:11.321642168 +0000 UTC Remote: 2025-01-20 14:02:11.249019145 +0000 UTC m=+40.644950772 (delta=72.623023ms)
	I0120 14:02:11.360095 1971324 fix.go:200] guest clock delta is within tolerance: 72.623023ms
	I0120 14:02:11.360110 1971324 start.go:83] releasing machines lock for "default-k8s-diff-port-727256", held for 23.8798308s
	I0120 14:02:11.360147 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.360474 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:11.363630 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.364131 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.364160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.364441 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365063 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365248 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365348 1971324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:02:11.365404 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.365419 1971324 ssh_runner.go:195] Run: cat /version.json
	I0120 14:02:11.365439 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.368411 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.368839 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.368879 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.368903 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.369109 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.369341 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.369383 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.369421 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.369557 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.369661 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.369746 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.369900 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.370094 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.370254 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.448584 1971324 ssh_runner.go:195] Run: systemctl --version
	I0120 14:02:11.476726 1971324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:02:11.630047 1971324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:02:11.636964 1971324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:02:11.637055 1971324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:02:11.654243 1971324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:02:11.654288 1971324 start.go:495] detecting cgroup driver to use...
	I0120 14:02:11.654363 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:02:11.671320 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:02:11.687866 1971324 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:02:11.687931 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:02:11.703932 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:02:11.718827 1971324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:02:11.847210 1971324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:02:12.007623 1971324 docker.go:233] disabling docker service ...
	I0120 14:02:12.007698 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:02:12.024946 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:02:12.039357 1971324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:02:12.198785 1971324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:02:12.318653 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:02:12.335226 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:02:12.356118 1971324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 14:02:12.356185 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.368853 1971324 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:02:12.368928 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.382590 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.395155 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.407707 1971324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:02:12.420260 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.432650 1971324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.451911 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.463708 1971324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:02:12.474047 1971324 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:02:12.474171 1971324 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:02:12.487873 1971324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:02:12.498585 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:02:12.613685 1971324 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:02:12.729768 1971324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:02:12.729875 1971324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:02:12.734978 1971324 start.go:563] Will wait 60s for crictl version
	I0120 14:02:12.735064 1971324 ssh_runner.go:195] Run: which crictl
	I0120 14:02:12.739280 1971324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:02:12.786678 1971324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:02:12.786793 1971324 ssh_runner.go:195] Run: crio --version
	I0120 14:02:12.817307 1971324 ssh_runner.go:195] Run: crio --version
	I0120 14:02:12.852593 1971324 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 14:02:10.778869 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.782521 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.186380 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:14.187082 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.853765 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:12.856623 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:12.857007 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:12.857053 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:12.857241 1971324 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 14:02:12.861728 1971324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:02:12.877000 1971324 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727
256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:02:12.877127 1971324 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:02:12.877169 1971324 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:02:12.929986 1971324 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 14:02:12.930071 1971324 ssh_runner.go:195] Run: which lz4
	I0120 14:02:12.934799 1971324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:02:12.939259 1971324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:02:12.939291 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 14:02:15.168447 1971324 crio.go:462] duration metric: took 2.233676027s to copy over tarball
	I0120 14:02:15.168587 1971324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 14:02:12.857737 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:13.358043 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:13.857191 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:14.357168 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:14.857760 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.357900 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.857889 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:16.357039 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:16.857812 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:17.358144 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.279029 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:17.281259 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:16.687293 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:18.717798 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:17.552550 1971324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.383920665s)
	I0120 14:02:17.552588 1971324 crio.go:469] duration metric: took 2.38410161s to extract the tarball
	I0120 14:02:17.552598 1971324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 14:02:17.595819 1971324 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:02:17.649094 1971324 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 14:02:17.649124 1971324 cache_images.go:84] Images are preloaded, skipping loading
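
	[editor's note] The preload decision above (crio.go:510/514, cache_images.go:84) hinges on whether crictl already reports the expected control-plane images. A minimal standalone sketch of that check, assuming `crictl images --output json` returns an object with an `images` array carrying `repoTags` (field names are an assumption about the crictl schema, not minikube's code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImages mirrors only the subset of `crictl images --output json`
	// that the check needs; the JSON field names are an assumption.
	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether the runtime already knows the given tag,
	// e.g. "registry.k8s.io/kube-apiserver:v1.32.0".
	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var imgs criImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.32.0")
		if err != nil {
			fmt.Println("crictl query failed:", err)
			return
		}
		// false => copy and extract the preload tarball, as in the log above
		fmt.Println("preloaded:", ok)
	}
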
	I0120 14:02:17.649135 1971324 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.32.0 crio true true} ...
	I0120 14:02:17.649302 1971324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-727256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:02:17.649381 1971324 ssh_runner.go:195] Run: crio config
	I0120 14:02:17.704561 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:02:17.704586 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:02:17.704598 1971324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:02:17.704619 1971324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-727256 NodeName:default-k8s-diff-port-727256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 14:02:17.704750 1971324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-727256"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.104"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:02:17.704816 1971324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 14:02:17.716061 1971324 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:02:17.716155 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:02:17.727801 1971324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0120 14:02:17.748166 1971324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:02:17.766985 1971324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
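
	[editor's note] The 2308-byte file copied to /var/tmp/minikube/kubeadm.yaml.new above is the kubeadm config dumped at kubeadm.go:195, rendered from the options logged at kubeadm.go:189. A hedged sketch of producing just the ClusterConfiguration part of such a file with text/template (the template and struct are illustrative stand-ins, not minikube's actual generator):

	package main

	import (
		"os"
		"text/template"
	)

	// clusterOpts carries the few per-profile values used below; it is an
	// illustrative type, not minikube's internal kubeadm config struct.
	type clusterOpts struct {
		AdvertiseAddress  string
		APIServerPort     int
		PodSubnet         string
		ServiceCIDR       string
		KubernetesVersion string
	}

	const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
	`

	func main() {
		opts := clusterOpts{
			AdvertiseAddress:  "192.168.72.104",
			APIServerPort:     8444,
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
			KubernetesVersion: "v1.32.0",
		}
		// Render to stdout; minikube instead copies the result to
		// /var/tmp/minikube/kubeadm.yaml.new and later diffs it (see below).
		tmpl := template.Must(template.New("cc").Parse(clusterConfigTmpl))
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}
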
	I0120 14:02:17.787650 1971324 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0120 14:02:17.791993 1971324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:02:17.808216 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:02:17.961542 1971324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:02:17.984203 1971324 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256 for IP: 192.168.72.104
	I0120 14:02:17.984233 1971324 certs.go:194] generating shared ca certs ...
	I0120 14:02:17.984291 1971324 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:02:17.984557 1971324 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:02:17.984648 1971324 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:02:17.984666 1971324 certs.go:256] generating profile certs ...
	I0120 14:02:17.984792 1971324 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.key
	I0120 14:02:17.984852 1971324 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.key.23647750
	I0120 14:02:17.984912 1971324 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.key
	I0120 14:02:17.985077 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:02:17.985121 1971324 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:02:17.985133 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:02:17.985155 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:02:17.985178 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:02:17.985198 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:02:17.985256 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:02:17.985878 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:02:18.048719 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:02:18.112171 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:02:18.145094 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:02:18.177563 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0120 14:02:18.207741 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 14:02:18.238193 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:02:18.267493 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:02:18.299204 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:02:18.326722 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:02:18.354365 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:02:18.387004 1971324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:02:18.407331 1971324 ssh_runner.go:195] Run: openssl version
	I0120 14:02:18.414499 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:02:18.428237 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.433437 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.433525 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.440279 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:02:18.453372 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:02:18.466685 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.472158 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.472221 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.479048 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 14:02:18.492239 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:02:18.505538 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.511360 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.511449 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.518290 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:02:18.531250 1971324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:02:18.536241 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:02:18.543115 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:02:18.549735 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:02:18.556016 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:02:18.563051 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:02:18.569460 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
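
	[editor's note] Each `openssl x509 ... -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same check done natively in Go, parsing the PEM instead of shelling out (a sketch; the path is taken from the log as an example):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// will have expired d from now — the equivalent of `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
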
	I0120 14:02:18.576252 1971324 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:02:18.576356 1971324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:02:18.576422 1971324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:02:18.620494 1971324 cri.go:89] found id: ""
	I0120 14:02:18.620569 1971324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:02:18.631697 1971324 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:02:18.631720 1971324 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:02:18.631768 1971324 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:02:18.642156 1971324 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:02:18.643051 1971324 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-727256" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:02:18.643528 1971324 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-727256" cluster setting kubeconfig missing "default-k8s-diff-port-727256" context setting]
	I0120 14:02:18.644170 1971324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
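
	[editor's note] kubeconfig.go:47/62 above detects that the "default-k8s-diff-port-727256" cluster and context are missing from the jenkins kubeconfig and repairs it. A minimal sketch of that detection using client-go's clientcmd loader (a sketch assuming the file is a standard clientcmd kubeconfig; the repair step itself is omitted):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	// needsRepair reports whether the named cluster or context is missing
	// from the kubeconfig at path — roughly the decision made at kubeconfig.go:62.
	func needsRepair(path, name string) (bool, error) {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return false, err
		}
		_, hasCluster := cfg.Clusters[name]
		_, hasContext := cfg.Contexts[name]
		return !hasCluster || !hasContext, nil
	}

	func main() {
		repair, err := needsRepair("/home/jenkins/minikube-integration/20242-1920423/kubeconfig",
			"default-k8s-diff-port-727256")
		if err != nil {
			fmt.Println("load failed:", err)
			return
		}
		fmt.Println("needs updating (will repair):", repair)
	}
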
	I0120 14:02:18.668914 1971324 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:02:18.683072 1971324 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0120 14:02:18.683114 1971324 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:02:18.683129 1971324 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 14:02:18.683183 1971324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:02:18.729285 1971324 cri.go:89] found id: ""
	I0120 14:02:18.729378 1971324 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:02:18.747615 1971324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:02:18.760814 1971324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:02:18.760838 1971324 kubeadm.go:157] found existing configuration files:
	
	I0120 14:02:18.760894 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 14:02:18.770641 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:02:18.770724 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:02:18.781179 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 14:02:18.792949 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:02:18.793028 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:02:18.804366 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 14:02:18.815263 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:02:18.815346 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:02:18.825942 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 14:02:18.835903 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:02:18.835982 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:02:18.845972 1971324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:02:18.859961 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.003738 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.608160 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.849647 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.912750 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
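
	[editor's note] The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`. A hedged sketch of driving that sequence over a local shell; the helper is illustrative, whereas minikube runs these through its ssh_runner with the version-pinned PATH shown in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runPhase invokes one kubeadm init phase against the staged config,
	// prefixing PATH with the version-pinned binaries directory, as above.
	func runPhase(phase string) error {
		args := append([]string{"env", "PATH=/var/lib/minikube/binaries/v1.32.0:/usr/bin:/bin",
			"kubeadm", "init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
		return nil
	}

	func main() {
		// Same order as the restartPrimaryControlPlane flow in the log.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			if err := runPhase(p); err != nil {
				fmt.Println(err)
				return
			}
			fmt.Println("ok:", p)
		}
	}
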
	I0120 14:02:20.009660 1971324 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:02:20.009754 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.510534 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:17.857538 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:18.357133 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:18.857266 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.357682 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.857168 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.357018 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.857784 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.357312 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.857374 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:22.357052 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.469918 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:21.779262 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:21.010159 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.032056 1971324 api_server.go:72] duration metric: took 1.022395241s to wait for apiserver process to appear ...
	I0120 14:02:21.032096 1971324 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:02:21.032131 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:21.032697 1971324 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0120 14:02:21.532363 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:23.847330 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:02:23.847369 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:02:23.847385 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:23.877401 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:02:23.877441 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:02:24.032826 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:24.039566 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:02:24.039598 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:02:24.532837 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:24.539028 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:02:24.539067 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:02:25.032465 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:25.039986 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0120 14:02:25.049377 1971324 api_server.go:141] control plane version: v1.32.0
	I0120 14:02:25.049420 1971324 api_server.go:131] duration metric: took 4.017316014s to wait for apiserver health ...
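
	[editor's note] api_server.go:253/279 above polls /healthz until it returns 200, treating the 403 (anonymous access denied before RBAC bootstrap) and 500 (poststarthooks still failing) responses as "not ready yet". A minimal sketch of such a loop, skipping TLS verification the way an early probe must (illustrative only; the real code authenticates and bounds the wait differently):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the endpoint until it answers 200 OK or the
	// deadline passes; non-200 bodies (403/500 as in the log) are retried.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The serving cert may not be trusted by this host yet, so the
			// probe skips verification — acceptable only for a readiness poll.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.104:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
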
	I0120 14:02:25.049433 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:02:25.049442 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:02:25.051482 1971324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:02:21.185126 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:23.186698 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:25.052855 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:02:25.066022 1971324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
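
	[editor's note] The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced at out.go:177. A sketch of writing a comparable conflist; the field values are an assumption modeled on a typical bridge + host-local setup for the 10.244.0.0/16 pod CIDR, not a byte-for-byte copy of minikube's template:

	package main

	import "os"

	// bridgeConflist is an illustrative bridge CNI config for the pod CIDR
	// shown in the log; minikube's actual template may differ in detail.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}
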
	I0120 14:02:25.095180 1971324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:02:25.114905 1971324 system_pods.go:59] 8 kube-system pods found
	I0120 14:02:25.114960 1971324 system_pods.go:61] "coredns-668d6bf9bc-bz5qj" [d7374913-ed7c-42dc-a94f-44e1e2c757a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:02:25.114976 1971324 system_pods.go:61] "etcd-default-k8s-diff-port-727256" [1b7d5ec9-7630-4785-9c45-41ecdb748a8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 14:02:25.114986 1971324 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-727256" [41957bec-6146-4451-a58e-80cfc0954d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 14:02:25.115001 1971324 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-727256" [700634af-068c-43a9-93fd-cb10680f5547] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 14:02:25.115015 1971324 system_pods.go:61] "kube-proxy-q48xh" [714b43b5-29d9-4ffb-a571-d319ac71ea64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 14:02:25.115023 1971324 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-727256" [37e3619f-2d6c-4ffd-a8a2-e9e935b79342] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 14:02:25.115037 1971324 system_pods.go:61] "metrics-server-f79f97bbb-wgptn" [c1255c51-78a3-4f21-a054-b7eec52e8021] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:02:25.115045 1971324 system_pods.go:61] "storage-provisioner" [f116e0d4-4c99-46b2-bb50-448d19e948da] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 14:02:25.115063 1971324 system_pods.go:74] duration metric: took 19.845736ms to wait for pod list to return data ...
	I0120 14:02:25.115078 1971324 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:02:25.140084 1971324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:02:25.140127 1971324 node_conditions.go:123] node cpu capacity is 2
	I0120 14:02:25.140143 1971324 node_conditions.go:105] duration metric: took 25.059269ms to run NodePressure ...
	I0120 14:02:25.140170 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:25.471605 1971324 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 14:02:25.475871 1971324 kubeadm.go:739] kubelet initialised
	I0120 14:02:25.475897 1971324 kubeadm.go:740] duration metric: took 4.262299ms waiting for restarted kubelet to initialise ...
	I0120 14:02:25.475907 1971324 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:02:25.481730 1971324 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:22.857953 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:23.357118 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:23.857846 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.357974 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.858083 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:25.357532 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:25.857724 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:26.357640 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:26.857695 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:27.357848 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.279782 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:26.777640 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:28.778330 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:25.686765 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:28.186774 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:27.488205 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:29.990080 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:27.857637 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:28.357980 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:28.857073 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:29.357768 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:29.857689 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.358021 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.857725 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:31.357087 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:31.857093 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:32.358124 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.783033 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:33.279302 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:30.685246 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:33.195660 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:31.992749 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:34.489038 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:32.857233 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:33.357972 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:33.857268 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:34.357580 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:34.857317 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.357391 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.858044 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:36.357666 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:36.857501 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:37.357800 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.282839 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:37.778057 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:35.685341 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:37.685683 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:40.185648 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:35.989736 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:35.989764 1971324 pod_ready.go:82] duration metric: took 10.507995257s for pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.989775 1971324 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.994950 1971324 pod_ready.go:93] pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:35.994974 1971324 pod_ready.go:82] duration metric: took 5.193222ms for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.994984 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:38.002261 1971324 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:39.002130 1971324 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.002163 1971324 pod_ready.go:82] duration metric: took 3.007172332s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.002175 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.007066 1971324 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.007092 1971324 pod_ready.go:82] duration metric: took 4.909894ms for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.007102 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q48xh" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.011300 1971324 pod_ready.go:93] pod "kube-proxy-q48xh" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.011327 1971324 pod_ready.go:82] duration metric: took 4.217903ms for pod "kube-proxy-q48xh" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.011339 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.019267 1971324 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.019290 1971324 pod_ready.go:82] duration metric: took 7.94282ms for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.019299 1971324 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:37.857302 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:38.357923 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:38.857475 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.357375 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.857802 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:40.357852 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:40.857000 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:41.357100 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:41.857256 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:42.357310 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.778127 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:41.778931 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:42.185876 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:44.685996 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:41.026382 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:43.026822 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:45.526641 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:42.857156 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:43.357487 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:43.857399 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.357134 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.857807 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:45.358043 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:45.857787 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:46.357476 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:46.857480 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:47.357059 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.284374 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:46.778063 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:46.686210 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:49.185352 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:48.025036 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:50.027377 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:47.857389 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:48.357917 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:48.857908 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.357865 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.857103 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:50.357844 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:50.856981 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:51.357722 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:51.857389 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:52.357276 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.277771 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:51.280318 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:53.778876 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:51.685546 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:53.685814 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:52.526770 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.026492 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:52.857418 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:53.357813 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:53.857620 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:54.357209 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:54.857914 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.357510 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.857571 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:56.357067 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:56.857492 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:57.357062 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.783020 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:58.280672 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.686206 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:58.186818 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:57.026925 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:59.525553 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:57.857477 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:02:57.857614 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:02:57.905881 1971155 cri.go:89] found id: ""
	I0120 14:02:57.905912 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.905922 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:02:57.905929 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:02:57.905992 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:02:57.943622 1971155 cri.go:89] found id: ""
	I0120 14:02:57.943651 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.943661 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:02:57.943667 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:02:57.943723 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:02:57.988526 1971155 cri.go:89] found id: ""
	I0120 14:02:57.988562 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.988574 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:02:57.988583 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:02:57.988651 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:02:58.031485 1971155 cri.go:89] found id: ""
	I0120 14:02:58.031521 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.031534 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:02:58.031543 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:02:58.031610 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:02:58.068567 1971155 cri.go:89] found id: ""
	I0120 14:02:58.068598 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.068607 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:02:58.068613 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:02:58.068671 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:02:58.111132 1971155 cri.go:89] found id: ""
	I0120 14:02:58.111163 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.111172 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:02:58.111179 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:02:58.111249 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:02:58.148303 1971155 cri.go:89] found id: ""
	I0120 14:02:58.148347 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.148360 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:02:58.148369 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:02:58.148451 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:02:58.185950 1971155 cri.go:89] found id: ""
	I0120 14:02:58.185999 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.186012 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:02:58.186045 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:02:58.186067 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:02:58.240918 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:02:58.240967 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:02:58.257093 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:02:58.257146 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:02:58.414616 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:02:58.414647 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:02:58.414668 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:02:58.492488 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:02:58.492552 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:01.040468 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:01.055229 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:01.055334 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:01.096466 1971155 cri.go:89] found id: ""
	I0120 14:03:01.096504 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.096517 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:01.096527 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:01.096598 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:01.134935 1971155 cri.go:89] found id: ""
	I0120 14:03:01.134970 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.134981 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:01.134991 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:01.135067 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:01.173227 1971155 cri.go:89] found id: ""
	I0120 14:03:01.173260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.173270 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:01.173276 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:01.173330 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:01.214239 1971155 cri.go:89] found id: ""
	I0120 14:03:01.214284 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.214295 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:01.214305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:01.214371 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:01.256599 1971155 cri.go:89] found id: ""
	I0120 14:03:01.256637 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.256650 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:01.256659 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:01.256739 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:01.296996 1971155 cri.go:89] found id: ""
	I0120 14:03:01.297032 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.297061 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:01.297070 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:01.297138 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:01.332783 1971155 cri.go:89] found id: ""
	I0120 14:03:01.332823 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.332834 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:01.332843 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:01.332918 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:01.369365 1971155 cri.go:89] found id: ""
	I0120 14:03:01.369406 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.369421 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:01.369434 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:01.369451 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:01.414439 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:01.414480 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:01.471195 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:01.471246 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:01.486430 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:01.486462 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:01.574798 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:01.574828 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:01.574845 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:00.778133 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:02.778231 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:00.685031 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:03.185220 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:01.527499 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:04.025999 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:04.171235 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:04.188065 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:04.188156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:04.228357 1971155 cri.go:89] found id: ""
	I0120 14:03:04.228387 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.228400 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:04.228409 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:04.228467 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:04.267565 1971155 cri.go:89] found id: ""
	I0120 14:03:04.267610 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.267624 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:04.267635 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:04.267711 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:04.307392 1971155 cri.go:89] found id: ""
	I0120 14:03:04.307425 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.307434 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:04.307440 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:04.307508 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:04.349729 1971155 cri.go:89] found id: ""
	I0120 14:03:04.349767 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.349778 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:04.349786 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:04.349870 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:04.387475 1971155 cri.go:89] found id: ""
	I0120 14:03:04.387501 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.387509 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:04.387516 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:04.387572 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:04.427468 1971155 cri.go:89] found id: ""
	I0120 14:03:04.427509 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.427530 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:04.427539 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:04.427612 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:04.466639 1971155 cri.go:89] found id: ""
	I0120 14:03:04.466670 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.466679 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:04.466686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:04.466741 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:04.504757 1971155 cri.go:89] found id: ""
	I0120 14:03:04.504787 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.504795 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:04.504806 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:04.504818 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:04.557733 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:04.557779 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:04.573354 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:04.573387 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:04.650417 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:04.650446 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:04.650463 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:04.733072 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:04.733120 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:07.274982 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:07.290100 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:07.290193 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:07.332977 1971155 cri.go:89] found id: ""
	I0120 14:03:07.333017 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.333029 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:07.333038 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:07.333115 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:07.372892 1971155 cri.go:89] found id: ""
	I0120 14:03:07.372933 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.372945 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:07.372954 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:07.373026 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:07.425530 1971155 cri.go:89] found id: ""
	I0120 14:03:07.425577 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.425590 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:07.425600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:07.425662 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:04.778368 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:06.778647 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:05.684845 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:07.685532 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:06.026498 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:08.526091 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:10.526915 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:07.476155 1971155 cri.go:89] found id: ""
	I0120 14:03:07.476184 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.476193 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:07.476199 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:07.476254 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:07.521877 1971155 cri.go:89] found id: ""
	I0120 14:03:07.521914 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.521926 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:07.521939 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:07.522011 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:07.560355 1971155 cri.go:89] found id: ""
	I0120 14:03:07.560395 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.560409 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:07.560418 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:07.560487 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:07.600264 1971155 cri.go:89] found id: ""
	I0120 14:03:07.600300 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.600312 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:07.600320 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:07.600394 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:07.638852 1971155 cri.go:89] found id: ""
	I0120 14:03:07.638882 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.638891 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:07.638904 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:07.638921 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:07.697341 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:07.697388 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:07.712419 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:07.712453 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:07.790196 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:07.790219 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:07.790236 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:07.865638 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:07.865691 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:10.411816 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:10.425923 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:10.425995 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:10.469227 1971155 cri.go:89] found id: ""
	I0120 14:03:10.469260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.469271 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:10.469279 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:10.469335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:10.507955 1971155 cri.go:89] found id: ""
	I0120 14:03:10.507982 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.507991 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:10.507997 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:10.508064 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:10.543101 1971155 cri.go:89] found id: ""
	I0120 14:03:10.543127 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.543135 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:10.543141 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:10.543211 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:10.585664 1971155 cri.go:89] found id: ""
	I0120 14:03:10.585707 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.585722 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:10.585731 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:10.585798 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:10.623476 1971155 cri.go:89] found id: ""
	I0120 14:03:10.623509 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.623519 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:10.623526 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:10.623696 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:10.660175 1971155 cri.go:89] found id: ""
	I0120 14:03:10.660212 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.660236 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:10.660243 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:10.660328 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:10.701559 1971155 cri.go:89] found id: ""
	I0120 14:03:10.701587 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.701595 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:10.701601 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:10.701660 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:10.745904 1971155 cri.go:89] found id: ""
	I0120 14:03:10.745934 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.745946 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:10.745960 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:10.745977 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:10.797159 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:10.797195 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:10.811080 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:10.811114 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:10.892397 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:10.892453 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:10.892474 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:10.974483 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:10.974548 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:09.277769 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:11.279861 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.778783 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:10.188443 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:12.684802 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:14.685044 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.026831 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:15.028964 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.520017 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:13.534970 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:13.535057 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:13.572408 1971155 cri.go:89] found id: ""
	I0120 14:03:13.572447 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.572460 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:13.572469 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:13.572551 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:13.611551 1971155 cri.go:89] found id: ""
	I0120 14:03:13.611584 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.611594 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:13.611602 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:13.611679 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:13.648597 1971155 cri.go:89] found id: ""
	I0120 14:03:13.648643 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.648659 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:13.648669 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:13.648746 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:13.688240 1971155 cri.go:89] found id: ""
	I0120 14:03:13.688273 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.688284 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:13.688292 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:13.688359 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:13.726824 1971155 cri.go:89] found id: ""
	I0120 14:03:13.726858 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.726870 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:13.726879 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:13.726960 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:13.763355 1971155 cri.go:89] found id: ""
	I0120 14:03:13.763393 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.763406 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:13.763426 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:13.763520 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:13.805672 1971155 cri.go:89] found id: ""
	I0120 14:03:13.805709 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.805721 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:13.805729 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:13.805808 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:13.843604 1971155 cri.go:89] found id: ""
	I0120 14:03:13.843639 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.843647 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:13.843658 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:13.843677 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:13.900719 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:13.900769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:13.917734 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:13.917769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:13.989979 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:13.990004 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:13.990023 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:14.065519 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:14.065568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:16.608887 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:16.624966 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:16.625095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:16.663250 1971155 cri.go:89] found id: ""
	I0120 14:03:16.663286 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.663299 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:16.663309 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:16.663381 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:16.705075 1971155 cri.go:89] found id: ""
	I0120 14:03:16.705109 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.705121 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:16.705129 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:16.705203 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:16.743136 1971155 cri.go:89] found id: ""
	I0120 14:03:16.743172 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.743183 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:16.743196 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:16.743259 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:16.781721 1971155 cri.go:89] found id: ""
	I0120 14:03:16.781749 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.781759 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:16.781768 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:16.781838 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:16.819156 1971155 cri.go:89] found id: ""
	I0120 14:03:16.819186 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.819195 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:16.819201 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:16.819267 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:16.857239 1971155 cri.go:89] found id: ""
	I0120 14:03:16.857271 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.857282 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:16.857291 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:16.857366 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:16.896447 1971155 cri.go:89] found id: ""
	I0120 14:03:16.896484 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.896494 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:16.896500 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:16.896573 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:16.933838 1971155 cri.go:89] found id: ""
	I0120 14:03:16.933868 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.933884 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:16.933895 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:16.933912 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:16.947603 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:16.947641 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:17.030769 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:17.030797 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:17.030817 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:17.113685 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:17.113733 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:17.156727 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:17.156762 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:16.279194 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:18.279451 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:16.686668 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.185833 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:17.525194 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.526034 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.718569 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:19.732512 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:19.732591 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:19.767932 1971155 cri.go:89] found id: ""
	I0120 14:03:19.767967 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.767978 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:19.767986 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:19.768060 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:19.803810 1971155 cri.go:89] found id: ""
	I0120 14:03:19.803849 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.803862 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:19.803870 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:19.803939 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:19.843834 1971155 cri.go:89] found id: ""
	I0120 14:03:19.843862 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.843873 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:19.843886 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:19.843958 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:19.881732 1971155 cri.go:89] found id: ""
	I0120 14:03:19.881763 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.881774 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:19.881781 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:19.881848 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:19.924381 1971155 cri.go:89] found id: ""
	I0120 14:03:19.924417 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.924428 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:19.924437 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:19.924502 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:19.970958 1971155 cri.go:89] found id: ""
	I0120 14:03:19.970987 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.970996 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:19.971004 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:19.971065 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:20.012745 1971155 cri.go:89] found id: ""
	I0120 14:03:20.012781 1971155 logs.go:282] 0 containers: []
	W0120 14:03:20.012792 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:20.012800 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:20.012874 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:20.051390 1971155 cri.go:89] found id: ""
	I0120 14:03:20.051440 1971155 logs.go:282] 0 containers: []
	W0120 14:03:20.051458 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:20.051472 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:20.051496 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:20.110400 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:20.110442 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:20.127460 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:20.127494 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:20.204395 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:20.204421 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:20.204438 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:20.285467 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:20.285512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:20.281009 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:22.778157 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:21.685011 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:24.185145 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:21.527945 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:24.028130 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:22.839418 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:22.853700 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:22.853779 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:22.889955 1971155 cri.go:89] found id: ""
	I0120 14:03:22.889984 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.889992 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:22.889998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:22.890051 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:22.927006 1971155 cri.go:89] found id: ""
	I0120 14:03:22.927035 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.927044 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:22.927050 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:22.927114 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:22.964259 1971155 cri.go:89] found id: ""
	I0120 14:03:22.964295 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.964321 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:22.964330 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:22.964394 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:23.002226 1971155 cri.go:89] found id: ""
	I0120 14:03:23.002259 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.002268 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:23.002274 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:23.002331 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:23.039583 1971155 cri.go:89] found id: ""
	I0120 14:03:23.039620 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.039633 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:23.039641 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:23.039722 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:23.078733 1971155 cri.go:89] found id: ""
	I0120 14:03:23.078761 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.078770 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:23.078803 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:23.078878 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:23.114333 1971155 cri.go:89] found id: ""
	I0120 14:03:23.114390 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.114403 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:23.114411 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:23.114485 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:23.150761 1971155 cri.go:89] found id: ""
	I0120 14:03:23.150797 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.150809 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:23.150824 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:23.150839 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:23.213320 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:23.213384 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:23.228681 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:23.228717 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:23.301816 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:23.301842 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:23.301858 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:23.387061 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:23.387117 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:25.931823 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:25.945038 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:25.945134 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:25.981262 1971155 cri.go:89] found id: ""
	I0120 14:03:25.981315 1971155 logs.go:282] 0 containers: []
	W0120 14:03:25.981330 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:25.981340 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:25.981420 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:26.018945 1971155 cri.go:89] found id: ""
	I0120 14:03:26.018980 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.018993 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:26.019001 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:26.019080 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:26.060446 1971155 cri.go:89] found id: ""
	I0120 14:03:26.060477 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.060487 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:26.060496 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:26.060563 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:26.097720 1971155 cri.go:89] found id: ""
	I0120 14:03:26.097761 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.097782 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:26.097792 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:26.097861 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:26.133561 1971155 cri.go:89] found id: ""
	I0120 14:03:26.133593 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.133605 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:26.133614 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:26.133701 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:26.175091 1971155 cri.go:89] found id: ""
	I0120 14:03:26.175124 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.175136 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:26.175144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:26.175206 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:26.214747 1971155 cri.go:89] found id: ""
	I0120 14:03:26.214779 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.214788 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:26.214794 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:26.214864 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:26.264211 1971155 cri.go:89] found id: ""
	I0120 14:03:26.264244 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.264255 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:26.264269 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:26.264291 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:26.282025 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:26.282062 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:26.359793 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:26.359820 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:26.359842 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:26.447177 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:26.447224 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:26.487488 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:26.487523 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:25.279187 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:27.282700 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:26.186599 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:28.684816 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:26.527177 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:29.026067 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:29.039824 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:29.054535 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:29.054619 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:29.096202 1971155 cri.go:89] found id: ""
	I0120 14:03:29.096233 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.096245 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:29.096254 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:29.096316 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:29.139442 1971155 cri.go:89] found id: ""
	I0120 14:03:29.139475 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.139485 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:29.139492 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:29.139565 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:29.181278 1971155 cri.go:89] found id: ""
	I0120 14:03:29.181320 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.181334 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:29.181343 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:29.181424 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:29.222018 1971155 cri.go:89] found id: ""
	I0120 14:03:29.222049 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.222058 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:29.222072 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:29.222129 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:29.263028 1971155 cri.go:89] found id: ""
	I0120 14:03:29.263071 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.263083 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:29.263092 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:29.263167 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:29.307933 1971155 cri.go:89] found id: ""
	I0120 14:03:29.307965 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.307973 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:29.307980 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:29.308040 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:29.344204 1971155 cri.go:89] found id: ""
	I0120 14:03:29.344237 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.344250 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:29.344258 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:29.344327 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:29.381577 1971155 cri.go:89] found id: ""
	I0120 14:03:29.381604 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.381613 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:29.381623 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:29.381636 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:29.396553 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:29.396592 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:29.476381 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:29.476406 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:29.476420 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:29.552542 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:29.552586 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:29.597585 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:29.597619 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:32.150749 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:32.166160 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:32.166240 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:32.209621 1971155 cri.go:89] found id: ""
	I0120 14:03:32.209657 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.209671 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:32.209682 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:32.209764 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:32.250347 1971155 cri.go:89] found id: ""
	I0120 14:03:32.250386 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.250397 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:32.250405 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:32.250477 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:32.291555 1971155 cri.go:89] found id: ""
	I0120 14:03:32.291587 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.291599 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:32.291607 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:32.291677 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:32.329975 1971155 cri.go:89] found id: ""
	I0120 14:03:32.330015 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.330023 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:32.330030 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:32.330107 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:32.371131 1971155 cri.go:89] found id: ""
	I0120 14:03:32.371170 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.371190 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:32.371199 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:32.371273 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:32.409613 1971155 cri.go:89] found id: ""
	I0120 14:03:32.409653 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.409665 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:32.409672 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:32.409732 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:29.778719 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:32.279358 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:30.686778 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:33.184968 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:35.185398 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:31.026580 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:33.028333 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:35.527445 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:32.448898 1971155 cri.go:89] found id: ""
	I0120 14:03:32.448932 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.448944 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:32.448953 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:32.449029 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:32.486258 1971155 cri.go:89] found id: ""
	I0120 14:03:32.486295 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.486308 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:32.486323 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:32.486340 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:32.538196 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:32.538238 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:32.553140 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:32.553173 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:32.640124 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:32.640147 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:32.640161 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:32.725556 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:32.725615 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:35.276962 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:35.292662 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:35.292754 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:35.332066 1971155 cri.go:89] found id: ""
	I0120 14:03:35.332099 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.332111 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:35.332119 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:35.332188 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:35.369977 1971155 cri.go:89] found id: ""
	I0120 14:03:35.370010 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.370024 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:35.370030 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:35.370099 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:35.412630 1971155 cri.go:89] found id: ""
	I0120 14:03:35.412663 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.412672 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:35.412680 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:35.412746 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:35.450785 1971155 cri.go:89] found id: ""
	I0120 14:03:35.450819 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.450830 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:35.450838 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:35.450908 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:35.496877 1971155 cri.go:89] found id: ""
	I0120 14:03:35.496930 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.496943 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:35.496950 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:35.497021 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:35.538626 1971155 cri.go:89] found id: ""
	I0120 14:03:35.538662 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.538675 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:35.538684 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:35.538768 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:35.579144 1971155 cri.go:89] found id: ""
	I0120 14:03:35.579181 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.579195 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:35.579204 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:35.579283 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:35.623935 1971155 cri.go:89] found id: ""
	I0120 14:03:35.623985 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.623997 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:35.624038 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:35.624074 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:35.664682 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:35.664716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:35.722441 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:35.722505 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:35.752215 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:35.752246 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:35.843666 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:35.843692 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:35.843706 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:34.778378 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:36.778557 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:37.685015 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:39.689385 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:38.026699 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:40.526689 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:38.427318 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:38.441690 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:38.441767 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:38.481605 1971155 cri.go:89] found id: ""
	I0120 14:03:38.481636 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.481648 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:38.481655 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:38.481726 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:38.518378 1971155 cri.go:89] found id: ""
	I0120 14:03:38.518415 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.518427 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:38.518436 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:38.518512 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:38.561625 1971155 cri.go:89] found id: ""
	I0120 14:03:38.561674 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.561687 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:38.561696 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:38.561764 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:38.603557 1971155 cri.go:89] found id: ""
	I0120 14:03:38.603585 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.603593 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:38.603600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:38.603671 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:38.644242 1971155 cri.go:89] found id: ""
	I0120 14:03:38.644276 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.644289 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:38.644298 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:38.644364 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:38.686124 1971155 cri.go:89] found id: ""
	I0120 14:03:38.686154 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.686166 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:38.686175 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:38.686257 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:38.731861 1971155 cri.go:89] found id: ""
	I0120 14:03:38.731896 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.731906 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:38.731915 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:38.732002 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:38.773494 1971155 cri.go:89] found id: ""
	I0120 14:03:38.773522 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.773533 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:38.773579 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:38.773602 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:38.827125 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:38.827168 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:38.841903 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:38.841939 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:38.928392 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:38.928423 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:38.928440 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:39.008227 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:39.008270 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:41.554775 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:41.568912 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:41.568983 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:41.616455 1971155 cri.go:89] found id: ""
	I0120 14:03:41.616483 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.616491 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:41.616505 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:41.616584 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:41.654958 1971155 cri.go:89] found id: ""
	I0120 14:03:41.654995 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.655007 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:41.655014 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:41.655091 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:41.695758 1971155 cri.go:89] found id: ""
	I0120 14:03:41.695800 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.695814 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:41.695824 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:41.695901 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:41.733782 1971155 cri.go:89] found id: ""
	I0120 14:03:41.733815 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.733826 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:41.733834 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:41.733906 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:41.771097 1971155 cri.go:89] found id: ""
	I0120 14:03:41.771129 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.771141 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:41.771150 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:41.771266 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:41.808590 1971155 cri.go:89] found id: ""
	I0120 14:03:41.808629 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.808643 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:41.808652 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:41.808733 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:41.848943 1971155 cri.go:89] found id: ""
	I0120 14:03:41.848971 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.848982 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:41.848990 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:41.849057 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:41.886267 1971155 cri.go:89] found id: ""
	I0120 14:03:41.886302 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.886315 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:41.886328 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:41.886354 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:41.903471 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:41.903519 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:41.980320 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:41.980342 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:41.980358 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:42.060823 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:42.060868 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:42.102476 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:42.102511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:39.278753 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:41.778436 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:42.189707 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:44.686641 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:43.026630 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:45.526315 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:44.677081 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:44.691997 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:44.692094 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:44.732561 1971155 cri.go:89] found id: ""
	I0120 14:03:44.732599 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.732611 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:44.732620 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:44.732701 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:44.774215 1971155 cri.go:89] found id: ""
	I0120 14:03:44.774250 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.774259 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:44.774266 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:44.774330 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:44.815997 1971155 cri.go:89] found id: ""
	I0120 14:03:44.816031 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.816040 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:44.816046 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:44.816109 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:44.853946 1971155 cri.go:89] found id: ""
	I0120 14:03:44.853984 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.853996 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:44.854004 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:44.854070 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:44.896969 1971155 cri.go:89] found id: ""
	I0120 14:03:44.897006 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.897018 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:44.897028 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:44.897120 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:44.942458 1971155 cri.go:89] found id: ""
	I0120 14:03:44.942496 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.942508 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:44.942518 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:44.942648 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:44.984028 1971155 cri.go:89] found id: ""
	I0120 14:03:44.984059 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.984084 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:44.984094 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:44.984173 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:45.026096 1971155 cri.go:89] found id: ""
	I0120 14:03:45.026130 1971155 logs.go:282] 0 containers: []
	W0120 14:03:45.026141 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:45.026153 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:45.026169 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:45.110471 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:45.110527 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:45.154855 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:45.154892 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:45.214465 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:45.214511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:45.232020 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:45.232054 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:45.312932 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:44.278244 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:46.777269 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:48.777901 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.184802 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:49.184874 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.526520 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:50.026151 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.813923 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:47.828326 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:47.828422 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:47.865843 1971155 cri.go:89] found id: ""
	I0120 14:03:47.865875 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.865884 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:47.865891 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:47.865952 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:47.913554 1971155 cri.go:89] found id: ""
	I0120 14:03:47.913582 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.913590 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:47.913597 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:47.913655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:47.970084 1971155 cri.go:89] found id: ""
	I0120 14:03:47.970115 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.970135 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:47.970144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:47.970205 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:48.016631 1971155 cri.go:89] found id: ""
	I0120 14:03:48.016737 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.016750 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:48.016758 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:48.016833 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:48.073208 1971155 cri.go:89] found id: ""
	I0120 14:03:48.073253 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.073266 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:48.073276 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:48.073387 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:48.111638 1971155 cri.go:89] found id: ""
	I0120 14:03:48.111680 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.111692 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:48.111701 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:48.111783 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:48.155605 1971155 cri.go:89] found id: ""
	I0120 14:03:48.155640 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.155653 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:48.155661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:48.155732 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:48.204162 1971155 cri.go:89] found id: ""
	I0120 14:03:48.204204 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.204219 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:48.204234 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:48.204257 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:48.259987 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:48.260042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:48.275801 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:48.275832 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:48.361115 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:48.361150 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:48.361170 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:48.443876 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:48.443921 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:50.992981 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:51.009283 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:51.009370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:51.052492 1971155 cri.go:89] found id: ""
	I0120 14:03:51.052523 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.052533 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:51.052540 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:51.052616 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:51.096548 1971155 cri.go:89] found id: ""
	I0120 14:03:51.096575 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.096583 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:51.096589 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:51.096655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:51.138339 1971155 cri.go:89] found id: ""
	I0120 14:03:51.138369 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.138378 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:51.138385 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:51.138456 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:51.181155 1971155 cri.go:89] found id: ""
	I0120 14:03:51.181188 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.181198 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:51.181205 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:51.181261 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:51.223988 1971155 cri.go:89] found id: ""
	I0120 14:03:51.224026 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.224038 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:51.224045 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:51.224106 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:51.261863 1971155 cri.go:89] found id: ""
	I0120 14:03:51.261896 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.261905 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:51.261911 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:51.261976 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:51.303816 1971155 cri.go:89] found id: ""
	I0120 14:03:51.303850 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.303862 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:51.303870 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:51.303946 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:51.340897 1971155 cri.go:89] found id: ""
	I0120 14:03:51.340935 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.340946 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:51.340960 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:51.340983 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:51.393462 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:51.393512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:51.409330 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:51.409361 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:51.483485 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:51.483510 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:51.483525 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:51.560879 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:51.560920 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:50.779106 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:53.278544 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:51.185101 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:53.186284 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:55.186474 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:52.026377 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:54.526778 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:54.106090 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:54.121203 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:54.121282 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:54.171790 1971155 cri.go:89] found id: ""
	I0120 14:03:54.171818 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.171826 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:54.171833 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:54.171888 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:54.215021 1971155 cri.go:89] found id: ""
	I0120 14:03:54.215058 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.215069 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:54.215076 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:54.215138 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:54.252537 1971155 cri.go:89] found id: ""
	I0120 14:03:54.252565 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.252573 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:54.252580 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:54.252635 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:54.291366 1971155 cri.go:89] found id: ""
	I0120 14:03:54.291396 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.291405 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:54.291411 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:54.291482 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:54.328162 1971155 cri.go:89] found id: ""
	I0120 14:03:54.328206 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.328219 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:54.328227 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:54.328310 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:54.366862 1971155 cri.go:89] found id: ""
	I0120 14:03:54.366898 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.366908 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:54.366920 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:54.366996 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:54.404501 1971155 cri.go:89] found id: ""
	I0120 14:03:54.404534 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.404543 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:54.404549 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:54.404609 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:54.443468 1971155 cri.go:89] found id: ""
	I0120 14:03:54.443504 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.443518 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:54.443531 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:54.443554 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:54.458948 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:54.458993 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:54.542353 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:54.542379 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:54.542400 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:54.629014 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:54.629060 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:54.673822 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:54.673857 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:57.228212 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:57.242552 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:57.242667 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:57.282187 1971155 cri.go:89] found id: ""
	I0120 14:03:57.282215 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.282225 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:57.282232 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:57.282306 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:57.319233 1971155 cri.go:89] found id: ""
	I0120 14:03:57.319260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.319268 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:57.319279 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:57.319340 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:57.356706 1971155 cri.go:89] found id: ""
	I0120 14:03:57.356730 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.356738 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:57.356744 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:57.356805 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:57.396553 1971155 cri.go:89] found id: ""
	I0120 14:03:57.396583 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.396594 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:57.396600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:57.396657 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:55.783799 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:58.278376 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.186658 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:59.686959 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.027014 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:59.525725 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.434802 1971155 cri.go:89] found id: ""
	I0120 14:03:57.434835 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.434847 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:57.434855 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:57.434927 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:57.471668 1971155 cri.go:89] found id: ""
	I0120 14:03:57.471699 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.471710 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:57.471719 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:57.471789 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:57.512283 1971155 cri.go:89] found id: ""
	I0120 14:03:57.512318 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.512329 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:57.512337 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:57.512409 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:57.549948 1971155 cri.go:89] found id: ""
	I0120 14:03:57.549977 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.549986 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:57.549996 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:57.550010 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:57.639160 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:57.639213 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:57.685920 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:57.685954 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:57.743891 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:57.743935 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:57.760181 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:57.760223 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:57.840777 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:00.342573 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:00.360314 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:00.360397 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:00.407962 1971155 cri.go:89] found id: ""
	I0120 14:04:00.407997 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.408010 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:00.408020 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:00.408086 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:00.450962 1971155 cri.go:89] found id: ""
	I0120 14:04:00.451040 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.451053 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:00.451062 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:00.451129 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:00.487180 1971155 cri.go:89] found id: ""
	I0120 14:04:00.487216 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.487227 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:00.487239 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:00.487311 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:00.530835 1971155 cri.go:89] found id: ""
	I0120 14:04:00.530864 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.530873 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:00.530880 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:00.530948 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:00.570212 1971155 cri.go:89] found id: ""
	I0120 14:04:00.570245 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.570257 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:00.570265 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:00.570335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:00.611685 1971155 cri.go:89] found id: ""
	I0120 14:04:00.611716 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.611725 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:00.611731 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:00.611785 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:00.649370 1971155 cri.go:89] found id: ""
	I0120 14:04:00.649410 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.649423 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:00.649432 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:00.649498 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:00.685853 1971155 cri.go:89] found id: ""
	I0120 14:04:00.685889 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.685901 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:00.685915 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:00.685930 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:00.737015 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:00.737051 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:00.751682 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:00.751716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:00.830222 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:00.830247 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:00.830262 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:00.918955 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:00.919003 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:00.279152 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:02.778569 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:02.185020 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:04.185796 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:01.526915 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:03.529074 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:03.461705 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:03.478063 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:03.478144 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:03.525289 1971155 cri.go:89] found id: ""
	I0120 14:04:03.525326 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.525339 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:03.525349 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:03.525427 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:03.565302 1971155 cri.go:89] found id: ""
	I0120 14:04:03.565339 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.565351 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:03.565360 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:03.565441 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:03.607021 1971155 cri.go:89] found id: ""
	I0120 14:04:03.607048 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.607056 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:03.607063 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:03.607122 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:03.650398 1971155 cri.go:89] found id: ""
	I0120 14:04:03.650425 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.650433 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:03.650445 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:03.650513 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:03.689498 1971155 cri.go:89] found id: ""
	I0120 14:04:03.689531 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.689539 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:03.689545 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:03.689607 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:03.726928 1971155 cri.go:89] found id: ""
	I0120 14:04:03.726965 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.726978 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:03.726987 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:03.727054 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:03.764493 1971155 cri.go:89] found id: ""
	I0120 14:04:03.764532 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.764544 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:03.764552 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:03.764622 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:03.803514 1971155 cri.go:89] found id: ""
	I0120 14:04:03.803550 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.803562 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:03.803575 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:03.803595 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:03.847009 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:03.847045 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:03.900078 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:03.900124 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:03.916146 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:03.916179 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:03.988068 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:03.988102 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:03.988121 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:06.568829 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:06.583335 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:06.583422 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:06.628247 1971155 cri.go:89] found id: ""
	I0120 14:04:06.628283 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.628296 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:06.628305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:06.628365 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:06.673764 1971155 cri.go:89] found id: ""
	I0120 14:04:06.673792 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.673804 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:06.673820 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:06.673892 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:06.714328 1971155 cri.go:89] found id: ""
	I0120 14:04:06.714361 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.714373 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:06.714381 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:06.714458 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:06.750935 1971155 cri.go:89] found id: ""
	I0120 14:04:06.750975 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.750987 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:06.750996 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:06.751061 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:06.788944 1971155 cri.go:89] found id: ""
	I0120 14:04:06.788975 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.788982 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:06.788988 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:06.789056 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:06.826176 1971155 cri.go:89] found id: ""
	I0120 14:04:06.826216 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.826228 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:06.826245 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:06.826322 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:06.864607 1971155 cri.go:89] found id: ""
	I0120 14:04:06.864640 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.864649 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:06.864656 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:06.864741 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:06.901814 1971155 cri.go:89] found id: ""
	I0120 14:04:06.901889 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.901909 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:06.901922 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:06.901944 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:06.953391 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:06.953439 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:06.967876 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:06.967914 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:07.055449 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:07.055486 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:07.055511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:07.138279 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:07.138328 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:04.778659 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.780874 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.188401 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:08.685683 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.026194 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:08.525961 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:10.527780 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:09.684182 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:09.699353 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:09.699432 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:09.738834 1971155 cri.go:89] found id: ""
	I0120 14:04:09.738864 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.738875 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:09.738883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:09.738963 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:09.774822 1971155 cri.go:89] found id: ""
	I0120 14:04:09.774852 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.774864 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:09.774872 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:09.774942 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:09.813132 1971155 cri.go:89] found id: ""
	I0120 14:04:09.813167 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.813179 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:09.813187 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:09.813258 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:09.850809 1971155 cri.go:89] found id: ""
	I0120 14:04:09.850844 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.850855 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:09.850864 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:09.850947 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:09.889768 1971155 cri.go:89] found id: ""
	I0120 14:04:09.889802 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.889813 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:09.889821 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:09.889900 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:09.932037 1971155 cri.go:89] found id: ""
	I0120 14:04:09.932073 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.932081 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:09.932087 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:09.932150 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:09.970153 1971155 cri.go:89] found id: ""
	I0120 14:04:09.970197 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.970210 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:09.970218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:09.970287 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:10.009506 1971155 cri.go:89] found id: ""
	I0120 14:04:10.009535 1971155 logs.go:282] 0 containers: []
	W0120 14:04:10.009544 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:10.009555 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:10.009568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:10.097837 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:10.097896 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:10.140488 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:10.140534 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:10.195531 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:10.195575 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:10.210277 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:10.210322 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:10.296146 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:09.279024 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:11.279883 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:13.776738 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:11.178584 1969949 pod_ready.go:82] duration metric: took 4m0.000311545s for pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace to be "Ready" ...
	E0120 14:04:11.178646 1969949 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 14:04:11.178676 1969949 pod_ready.go:39] duration metric: took 4m14.547669609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:11.178719 1969949 kubeadm.go:597] duration metric: took 4m22.42355041s to restartPrimaryControlPlane
	W0120 14:04:11.178845 1969949 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:04:11.178885 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:04:13.027079 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:15.027945 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:12.796944 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:12.810984 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:12.811085 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:12.849374 1971155 cri.go:89] found id: ""
	I0120 14:04:12.849413 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.849426 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:12.849435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:12.849509 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:12.885922 1971155 cri.go:89] found id: ""
	I0120 14:04:12.885951 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.885960 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:12.885967 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:12.886039 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:12.922978 1971155 cri.go:89] found id: ""
	I0120 14:04:12.923019 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.923031 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:12.923040 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:12.923108 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:12.960519 1971155 cri.go:89] found id: ""
	I0120 14:04:12.960563 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.960572 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:12.960578 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:12.960688 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:12.997662 1971155 cri.go:89] found id: ""
	I0120 14:04:12.997702 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.997715 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:12.997724 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:12.997803 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:13.035613 1971155 cri.go:89] found id: ""
	I0120 14:04:13.035640 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.035651 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:13.035660 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:13.035736 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:13.073354 1971155 cri.go:89] found id: ""
	I0120 14:04:13.073389 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.073401 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:13.073410 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:13.073480 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:13.113735 1971155 cri.go:89] found id: ""
	I0120 14:04:13.113771 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.113780 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:13.113791 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:13.113804 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:13.170858 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:13.170906 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:13.186341 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:13.186375 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:13.260514 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:13.260540 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:13.260557 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:13.347360 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:13.347411 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:15.891859 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:15.907144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:15.907238 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:15.943638 1971155 cri.go:89] found id: ""
	I0120 14:04:15.943675 1971155 logs.go:282] 0 containers: []
	W0120 14:04:15.943686 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:15.943693 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:15.943753 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:15.981820 1971155 cri.go:89] found id: ""
	I0120 14:04:15.981868 1971155 logs.go:282] 0 containers: []
	W0120 14:04:15.981882 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:15.981891 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:15.981971 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:16.019987 1971155 cri.go:89] found id: ""
	I0120 14:04:16.020058 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.020071 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:16.020080 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:16.020156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:16.059245 1971155 cri.go:89] found id: ""
	I0120 14:04:16.059278 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.059288 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:16.059295 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:16.059370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:16.095081 1971155 cri.go:89] found id: ""
	I0120 14:04:16.095123 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.095136 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:16.095146 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:16.095224 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:16.134357 1971155 cri.go:89] found id: ""
	I0120 14:04:16.134403 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.134416 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:16.134425 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:16.134497 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:16.177729 1971155 cri.go:89] found id: ""
	I0120 14:04:16.177762 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.177774 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:16.177783 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:16.177864 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:16.214324 1971155 cri.go:89] found id: ""
	I0120 14:04:16.214360 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.214371 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:16.214392 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:16.214412 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:16.270670 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:16.270716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:16.326541 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:16.326587 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:16.343430 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:16.343469 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:16.429522 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:16.429554 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:16.429572 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:15.778836 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:18.279084 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:17.526959 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:20.027030 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:19.008712 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:19.024398 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:19.024489 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:19.064138 1971155 cri.go:89] found id: ""
	I0120 14:04:19.064169 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.064178 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:19.064184 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:19.064253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:19.102639 1971155 cri.go:89] found id: ""
	I0120 14:04:19.102672 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.102681 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:19.102687 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:19.102755 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:19.141058 1971155 cri.go:89] found id: ""
	I0120 14:04:19.141105 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.141119 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:19.141130 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:19.141200 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:19.179972 1971155 cri.go:89] found id: ""
	I0120 14:04:19.180004 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.180013 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:19.180025 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:19.180095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:19.219516 1971155 cri.go:89] found id: ""
	I0120 14:04:19.219549 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.219562 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:19.219571 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:19.219634 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:19.262728 1971155 cri.go:89] found id: ""
	I0120 14:04:19.262764 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.262776 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:19.262785 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:19.262860 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:19.299472 1971155 cri.go:89] found id: ""
	I0120 14:04:19.299527 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.299539 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:19.299548 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:19.299634 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:19.341054 1971155 cri.go:89] found id: ""
	I0120 14:04:19.341095 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.341107 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:19.341119 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:19.341133 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:19.426002 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:19.426058 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:19.469471 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:19.469504 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:19.524625 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:19.524661 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:19.539365 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:19.539398 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:19.620545 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:22.122261 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:22.137515 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:22.137590 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:22.177366 1971155 cri.go:89] found id: ""
	I0120 14:04:22.177405 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.177417 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:22.177426 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:22.177494 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:22.215596 1971155 cri.go:89] found id: ""
	I0120 14:04:22.215641 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.215653 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:22.215662 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:22.215734 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:22.252783 1971155 cri.go:89] found id: ""
	I0120 14:04:22.252820 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.252832 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:22.252841 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:22.252917 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:22.295160 1971155 cri.go:89] found id: ""
	I0120 14:04:22.295199 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.295213 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:22.295221 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:22.295284 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:22.334614 1971155 cri.go:89] found id: ""
	I0120 14:04:22.334651 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.334662 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:22.334672 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:22.334754 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:22.372516 1971155 cri.go:89] found id: ""
	I0120 14:04:22.372545 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.372554 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:22.372562 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:22.372633 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:22.412784 1971155 cri.go:89] found id: ""
	I0120 14:04:22.412819 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.412827 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:22.412833 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:22.412895 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:20.778968 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.779314 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.526513 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:24.527843 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.449865 1971155 cri.go:89] found id: ""
	I0120 14:04:22.449900 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.449909 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:22.449920 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:22.449934 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:22.464473 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:22.464514 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:22.546804 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:22.546835 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:22.546858 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:22.624614 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:22.624664 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:22.679053 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:22.679085 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:25.238495 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:25.254177 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:25.254253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:25.299255 1971155 cri.go:89] found id: ""
	I0120 14:04:25.299291 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.299300 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:25.299308 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:25.299373 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:25.337454 1971155 cri.go:89] found id: ""
	I0120 14:04:25.337481 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.337490 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:25.337496 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:25.337556 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:25.375094 1971155 cri.go:89] found id: ""
	I0120 14:04:25.375129 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.375139 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:25.375148 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:25.375224 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:25.413177 1971155 cri.go:89] found id: ""
	I0120 14:04:25.413206 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.413217 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:25.413223 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:25.413288 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:25.448775 1971155 cri.go:89] found id: ""
	I0120 14:04:25.448812 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.448821 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:25.448827 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:25.448883 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:25.484560 1971155 cri.go:89] found id: ""
	I0120 14:04:25.484591 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.484600 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:25.484607 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:25.484660 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:25.522990 1971155 cri.go:89] found id: ""
	I0120 14:04:25.523029 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.523041 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:25.523049 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:25.523128 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:25.560861 1971155 cri.go:89] found id: ""
	I0120 14:04:25.560899 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.560910 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:25.560925 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:25.560941 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:25.614479 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:25.614528 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:25.630030 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:25.630070 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:25.704721 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:25.704758 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:25.704781 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:25.782265 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:25.782309 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:25.279994 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:27.778659 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:27.027167 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:29.525787 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:28.332905 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:28.351517 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:28.351594 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:28.394070 1971155 cri.go:89] found id: ""
	I0120 14:04:28.394110 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.394122 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:28.394130 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:28.394204 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:28.445893 1971155 cri.go:89] found id: ""
	I0120 14:04:28.445924 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.445934 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:28.445940 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:28.446034 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:28.511766 1971155 cri.go:89] found id: ""
	I0120 14:04:28.511801 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.511811 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:28.511820 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:28.511891 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:28.558333 1971155 cri.go:89] found id: ""
	I0120 14:04:28.558369 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.558382 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:28.558391 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:28.558469 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:28.608161 1971155 cri.go:89] found id: ""
	I0120 14:04:28.608196 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.608207 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:28.608215 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:28.608288 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:28.645545 1971155 cri.go:89] found id: ""
	I0120 14:04:28.645576 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.645585 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:28.645592 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:28.645651 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:28.682795 1971155 cri.go:89] found id: ""
	I0120 14:04:28.682833 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.682845 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:28.682854 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:28.682943 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:28.719887 1971155 cri.go:89] found id: ""
	I0120 14:04:28.719918 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.719928 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:28.719941 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:28.719965 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:28.776644 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:28.776683 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:28.791778 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:28.791812 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:28.870972 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:28.871001 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:28.871027 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:28.950524 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:28.950568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:31.494786 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:31.508961 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:31.509041 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:31.550239 1971155 cri.go:89] found id: ""
	I0120 14:04:31.550275 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.550287 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:31.550295 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:31.550374 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:31.589113 1971155 cri.go:89] found id: ""
	I0120 14:04:31.589149 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.589161 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:31.589169 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:31.589271 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:31.626500 1971155 cri.go:89] found id: ""
	I0120 14:04:31.626537 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.626547 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:31.626556 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:31.626655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:31.661941 1971155 cri.go:89] found id: ""
	I0120 14:04:31.661972 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.661980 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:31.661987 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:31.662079 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:31.699223 1971155 cri.go:89] found id: ""
	I0120 14:04:31.699269 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.699283 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:31.699291 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:31.699359 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:31.736559 1971155 cri.go:89] found id: ""
	I0120 14:04:31.736589 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.736601 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:31.736608 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:31.736680 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:31.774254 1971155 cri.go:89] found id: ""
	I0120 14:04:31.774296 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.774304 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:31.774314 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:31.774460 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:31.813913 1971155 cri.go:89] found id: ""
	I0120 14:04:31.813952 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.813964 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:31.813977 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:31.813991 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:31.864887 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:31.864936 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:31.880250 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:31.880286 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:31.955208 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:31.955232 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:31.955247 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:32.039812 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:32.039875 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:29.780496 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:32.277638 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:31.526304 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:33.527156 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:34.582127 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:34.595661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:34.595751 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:34.637306 1971155 cri.go:89] found id: ""
	I0120 14:04:34.637343 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.637355 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:34.637367 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:34.637440 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:34.676881 1971155 cri.go:89] found id: ""
	I0120 14:04:34.676913 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.676924 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:34.676929 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:34.676985 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:34.715677 1971155 cri.go:89] found id: ""
	I0120 14:04:34.715712 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.715723 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:34.715737 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:34.715801 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:34.754821 1971155 cri.go:89] found id: ""
	I0120 14:04:34.754855 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.754867 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:34.754875 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:34.754947 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:34.793093 1971155 cri.go:89] found id: ""
	I0120 14:04:34.793124 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.793133 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:34.793139 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:34.793200 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:34.830252 1971155 cri.go:89] found id: ""
	I0120 14:04:34.830285 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.830295 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:34.830302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:34.830370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:34.869405 1971155 cri.go:89] found id: ""
	I0120 14:04:34.869436 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.869447 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:34.869455 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:34.869528 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:34.910676 1971155 cri.go:89] found id: ""
	I0120 14:04:34.910708 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.910721 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:34.910735 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:34.910751 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:34.961049 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:34.961094 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:34.976224 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:34.976260 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:35.049407 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:35.049434 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:35.049452 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:35.133338 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:35.133396 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:34.279211 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:36.778511 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:39.032716 1969949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.853801532s)
	I0120 14:04:39.032805 1969949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:04:39.056153 1969949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:04:39.077937 1969949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:04:39.097957 1969949 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:04:39.097986 1969949 kubeadm.go:157] found existing configuration files:
	
	I0120 14:04:39.098074 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:04:39.127178 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:04:39.127249 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:04:39.140640 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:04:39.152447 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:04:39.152516 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:04:39.174543 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:04:39.185436 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:04:39.185521 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:04:39.196720 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:04:39.207028 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:04:39.207105 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:04:39.217474 1969949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:04:39.273124 1969949 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:04:39.273208 1969949 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:04:39.402646 1969949 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:04:39.402821 1969949 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:04:39.402964 1969949 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:04:39.411696 1969949 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:04:39.413689 1969949 out.go:235]   - Generating certificates and keys ...
	I0120 14:04:39.413807 1969949 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:04:39.413895 1969949 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:04:39.414021 1969949 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:04:39.414131 1969949 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:04:39.414240 1969949 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:04:39.414333 1969949 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:04:39.414455 1969949 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:04:39.414538 1969949 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:04:39.414693 1969949 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:04:39.414814 1969949 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:04:39.414881 1969949 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:04:39.414976 1969949 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:04:39.516867 1969949 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:04:39.700148 1969949 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:04:39.838568 1969949 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:04:40.020807 1969949 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:04:40.083569 1969949 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:04:40.083953 1969949 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:04:40.086599 1969949 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:04:40.088383 1969949 out.go:235]   - Booting up control plane ...
	I0120 14:04:40.088515 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:04:40.090041 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:04:40.092450 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:04:40.114859 1969949 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:04:40.124692 1969949 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:04:40.124773 1969949 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:04:36.025541 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:38.027612 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:40.528385 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:37.676133 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:37.690435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:37.690520 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:37.732788 1971155 cri.go:89] found id: ""
	I0120 14:04:37.732824 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.732837 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:37.732846 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:37.732914 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:37.770338 1971155 cri.go:89] found id: ""
	I0120 14:04:37.770375 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.770387 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:37.770395 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:37.770461 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:37.813580 1971155 cri.go:89] found id: ""
	I0120 14:04:37.813612 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.813639 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:37.813645 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:37.813702 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:37.854706 1971155 cri.go:89] found id: ""
	I0120 14:04:37.854740 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.854751 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:37.854759 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:37.854841 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:37.891577 1971155 cri.go:89] found id: ""
	I0120 14:04:37.891607 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.891616 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:37.891623 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:37.891681 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:37.928718 1971155 cri.go:89] found id: ""
	I0120 14:04:37.928750 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.928762 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:37.928772 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:37.928844 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:37.964166 1971155 cri.go:89] found id: ""
	I0120 14:04:37.964203 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.964211 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:37.964218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:37.964279 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:38.005257 1971155 cri.go:89] found id: ""
	I0120 14:04:38.005299 1971155 logs.go:282] 0 containers: []
	W0120 14:04:38.005311 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:38.005325 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:38.005340 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:38.058706 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:38.058756 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:38.073507 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:38.073584 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:38.149050 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:38.149073 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:38.149091 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:38.227105 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:38.227163 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:40.772041 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:40.787399 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:40.787471 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:40.828186 1971155 cri.go:89] found id: ""
	I0120 14:04:40.828226 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.828247 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:40.828257 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:40.828327 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:40.869532 1971155 cri.go:89] found id: ""
	I0120 14:04:40.869561 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.869573 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:40.869581 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:40.869670 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:40.916288 1971155 cri.go:89] found id: ""
	I0120 14:04:40.916324 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.916343 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:40.916357 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:40.916425 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:40.953018 1971155 cri.go:89] found id: ""
	I0120 14:04:40.953053 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.953066 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:40.953076 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:40.953150 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:40.993977 1971155 cri.go:89] found id: ""
	I0120 14:04:40.994012 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.994024 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:40.994033 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:40.994104 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:41.037652 1971155 cri.go:89] found id: ""
	I0120 14:04:41.037678 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.037685 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:41.037692 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:41.037756 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:41.085826 1971155 cri.go:89] found id: ""
	I0120 14:04:41.085855 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.085864 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:41.085873 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:41.085950 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:41.128902 1971155 cri.go:89] found id: ""
	I0120 14:04:41.128939 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.128951 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:41.128965 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:41.128984 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:41.182933 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:41.182976 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:41.198454 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:41.198493 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:41.278062 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:41.278090 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:41.278106 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:41.359935 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:41.359983 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:39.279853 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:41.778833 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:43.779056 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:40.281534 1969949 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:04:40.281697 1969949 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:04:41.283107 1969949 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001641988s
	I0120 14:04:41.283223 1969949 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:04:43.026341 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:45.027225 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:46.784985 1969949 kubeadm.go:310] [api-check] The API server is healthy after 5.501686403s
	I0120 14:04:46.800497 1969949 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:04:46.826466 1969949 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:04:46.872907 1969949 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:04:46.873201 1969949 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-648067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:04:46.893113 1969949 kubeadm.go:310] [bootstrap-token] Using token: hll471.vkmzt8kk1d060cyb
	I0120 14:04:43.908548 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:43.927397 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:43.927492 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:43.975131 1971155 cri.go:89] found id: ""
	I0120 14:04:43.975160 1971155 logs.go:282] 0 containers: []
	W0120 14:04:43.975169 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:43.975175 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:43.975243 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:44.020970 1971155 cri.go:89] found id: ""
	I0120 14:04:44.021006 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.021018 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:44.021027 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:44.021135 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:44.067873 1971155 cri.go:89] found id: ""
	I0120 14:04:44.067914 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.067927 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:44.067936 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:44.068010 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:44.108047 1971155 cri.go:89] found id: ""
	I0120 14:04:44.108082 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.108093 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:44.108099 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:44.108161 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:44.149416 1971155 cri.go:89] found id: ""
	I0120 14:04:44.149449 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.149458 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:44.149466 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:44.149521 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:44.189664 1971155 cri.go:89] found id: ""
	I0120 14:04:44.189701 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.189712 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:44.189720 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:44.189787 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:44.233518 1971155 cri.go:89] found id: ""
	I0120 14:04:44.233548 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.233558 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:44.233565 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:44.233635 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:44.279568 1971155 cri.go:89] found id: ""
	I0120 14:04:44.279603 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.279614 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:44.279626 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:44.279641 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:44.348693 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:44.348742 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:44.363510 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:44.363546 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:44.437107 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:44.437132 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:44.437146 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:44.516463 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:44.516512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:47.065723 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:47.081983 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:47.082120 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:47.122906 1971155 cri.go:89] found id: ""
	I0120 14:04:47.122945 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.122958 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:47.122969 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:47.123060 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:47.166879 1971155 cri.go:89] found id: ""
	I0120 14:04:47.166916 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.166928 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:47.166937 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:47.167012 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:47.213675 1971155 cri.go:89] found id: ""
	I0120 14:04:47.213706 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.213715 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:47.213722 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:47.213778 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:47.254655 1971155 cri.go:89] found id: ""
	I0120 14:04:47.254692 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.254702 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:47.254711 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:47.254787 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:47.297680 1971155 cri.go:89] found id: ""
	I0120 14:04:47.297718 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.297731 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:47.297741 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:47.297829 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:47.337150 1971155 cri.go:89] found id: ""
	I0120 14:04:47.337179 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.337188 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:47.337194 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:47.337258 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:47.376190 1971155 cri.go:89] found id: ""
	I0120 14:04:47.376223 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.376234 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:47.376242 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:47.376343 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:47.424425 1971155 cri.go:89] found id: ""
	I0120 14:04:47.424465 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.424477 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:47.424491 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:47.424508 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:46.894672 1969949 out.go:235]   - Configuring RBAC rules ...
	I0120 14:04:46.894865 1969949 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:04:46.901221 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:04:46.911875 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:04:46.916856 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:04:46.922245 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:04:46.929769 1969949 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:04:47.194825 1969949 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:04:47.629977 1969949 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:04:48.194241 1969949 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:04:48.195072 1969949 kubeadm.go:310] 
	I0120 14:04:48.195176 1969949 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:04:48.195193 1969949 kubeadm.go:310] 
	I0120 14:04:48.195309 1969949 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:04:48.195319 1969949 kubeadm.go:310] 
	I0120 14:04:48.195353 1969949 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:04:48.195444 1969949 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:04:48.195583 1969949 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:04:48.195610 1969949 kubeadm.go:310] 
	I0120 14:04:48.195693 1969949 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:04:48.195705 1969949 kubeadm.go:310] 
	I0120 14:04:48.195767 1969949 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:04:48.195776 1969949 kubeadm.go:310] 
	I0120 14:04:48.195891 1969949 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:04:48.196003 1969949 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:04:48.196119 1969949 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:04:48.196143 1969949 kubeadm.go:310] 
	I0120 14:04:48.196264 1969949 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:04:48.196353 1969949 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:04:48.196374 1969949 kubeadm.go:310] 
	I0120 14:04:48.196486 1969949 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hll471.vkmzt8kk1d060cyb \
	I0120 14:04:48.196623 1969949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:04:48.196658 1969949 kubeadm.go:310] 	--control-plane 
	I0120 14:04:48.196668 1969949 kubeadm.go:310] 
	I0120 14:04:48.196788 1969949 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:04:48.196797 1969949 kubeadm.go:310] 
	I0120 14:04:48.196887 1969949 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hll471.vkmzt8kk1d060cyb \
	I0120 14:04:48.196999 1969949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:04:48.198034 1969949 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:04:48.198074 1969949 cni.go:84] Creating CNI manager for ""
	I0120 14:04:48.198087 1969949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:04:48.199935 1969949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:04:46.278851 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:48.279224 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:48.201356 1969949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:04:48.213317 1969949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:04:48.232194 1969949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:04:48.232407 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-648067 minikube.k8s.io/updated_at=2025_01_20T14_04_48_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=no-preload-648067 minikube.k8s.io/primary=true
	I0120 14:04:48.232407 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:48.270777 1969949 ops.go:34] apiserver oom_adj: -16
	I0120 14:04:48.458517 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:48.959588 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:49.459308 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:49.958914 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:47.529098 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:50.025867 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:47.439773 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:47.439807 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:47.515012 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:47.515040 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:47.515077 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:47.602215 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:47.602253 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:47.647880 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:47.647910 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:50.211849 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:50.225773 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:50.225855 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:50.268626 1971155 cri.go:89] found id: ""
	I0120 14:04:50.268663 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.268676 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:50.268686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:50.268759 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:50.307523 1971155 cri.go:89] found id: ""
	I0120 14:04:50.307562 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.307575 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:50.307584 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:50.307656 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:50.347783 1971155 cri.go:89] found id: ""
	I0120 14:04:50.347820 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.347832 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:50.347840 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:50.347910 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:50.394427 1971155 cri.go:89] found id: ""
	I0120 14:04:50.394462 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.394474 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:50.394482 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:50.394564 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:50.434136 1971155 cri.go:89] found id: ""
	I0120 14:04:50.434168 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.434178 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:50.434187 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:50.434253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:50.472220 1971155 cri.go:89] found id: ""
	I0120 14:04:50.472256 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.472268 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:50.472277 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:50.472341 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:50.513511 1971155 cri.go:89] found id: ""
	I0120 14:04:50.513541 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.513552 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:50.513560 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:50.513630 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:50.551073 1971155 cri.go:89] found id: ""
	I0120 14:04:50.551110 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.551121 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:50.551143 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:50.551163 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:50.565714 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:50.565744 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:50.651186 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:50.651214 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:50.651238 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:50.735185 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:50.735234 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:50.780258 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:50.780287 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:50.459078 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:50.958680 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:51.459194 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:51.958693 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:52.459624 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:52.569627 1969949 kubeadm.go:1113] duration metric: took 4.337296975s to wait for elevateKubeSystemPrivileges
	I0120 14:04:52.569667 1969949 kubeadm.go:394] duration metric: took 5m3.880867579s to StartCluster
	I0120 14:04:52.569696 1969949 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:04:52.569799 1969949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:04:52.571249 1969949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:04:52.571569 1969949 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:04:52.571705 1969949 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:04:52.571794 1969949 addons.go:69] Setting storage-provisioner=true in profile "no-preload-648067"
	I0120 14:04:52.571819 1969949 addons.go:238] Setting addon storage-provisioner=true in "no-preload-648067"
	I0120 14:04:52.571819 1969949 addons.go:69] Setting default-storageclass=true in profile "no-preload-648067"
	W0120 14:04:52.571832 1969949 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:04:52.571833 1969949 addons.go:69] Setting metrics-server=true in profile "no-preload-648067"
	I0120 14:04:52.571850 1969949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-648067"
	I0120 14:04:52.571858 1969949 addons.go:238] Setting addon metrics-server=true in "no-preload-648067"
	W0120 14:04:52.571867 1969949 addons.go:247] addon metrics-server should already be in state true
	I0120 14:04:52.571861 1969949 addons.go:69] Setting dashboard=true in profile "no-preload-648067"
	I0120 14:04:52.571895 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571904 1969949 addons.go:238] Setting addon dashboard=true in "no-preload-648067"
	W0120 14:04:52.571919 1969949 addons.go:247] addon dashboard should already be in state true
	I0120 14:04:52.571873 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571957 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571816 1969949 config.go:182] Loaded profile config "no-preload-648067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:04:52.572249 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572310 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572388 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572402 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572429 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572437 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572388 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572514 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.573278 1969949 out.go:177] * Verifying Kubernetes components...
	I0120 14:04:52.574697 1969949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:04:52.593445 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35109
	I0120 14:04:52.593972 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I0120 14:04:52.594196 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0120 14:04:52.594251 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594311 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0120 14:04:52.594456 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594699 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594819 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.595051 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595058 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595072 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595075 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595878 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.595883 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.595967 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595978 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595992 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595994 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.596089 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.596460 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.596493 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.596495 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.596537 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.597392 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.597458 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.597937 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.597987 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.601273 1969949 addons.go:238] Setting addon default-storageclass=true in "no-preload-648067"
	W0120 14:04:52.601293 1969949 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:04:52.601328 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.601665 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.601709 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.615800 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0120 14:04:52.616400 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.617008 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.617030 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.617408 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.617522 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0120 14:04:52.617864 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.618536 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.619193 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.619209 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.619284 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0120 14:04:52.619647 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.619726 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.619909 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.620278 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.620296 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.620825 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.620943 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0120 14:04:52.621206 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.622123 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.622176 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.622220 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.623015 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.623665 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.623691 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.624470 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.625095 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.625143 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.625528 1969949 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:04:52.625540 1969949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:04:52.625550 1969949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:04:52.627935 1969949 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:04:50.279663 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.280483 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.627964 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:04:52.627983 1969949 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:04:52.628010 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.628135 1969949 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:04:52.628150 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:04:52.628172 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.629358 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:04:52.629377 1969949 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:04:52.629400 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.632446 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633059 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633123 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.633132 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.633166 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633329 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.633372 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.633419 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633507 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.633561 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.633761 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.634098 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.634129 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.634291 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.634635 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.634792 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.634816 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.635030 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.635288 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.635523 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.635673 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.649363 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I0120 14:04:52.649962 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.650624 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.650650 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.651046 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.651360 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.653362 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.653620 1969949 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:04:52.653637 1969949 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:04:52.653657 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.656950 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.657430 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.657459 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.657671 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.658472 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.658685 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.658860 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.827213 1969949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:04:52.892209 1969949 node_ready.go:35] waiting up to 6m0s for node "no-preload-648067" to be "Ready" ...
	I0120 14:04:52.927742 1969949 node_ready.go:49] node "no-preload-648067" has status "Ready":"True"
	I0120 14:04:52.927778 1969949 node_ready.go:38] duration metric: took 35.520382ms for node "no-preload-648067" to be "Ready" ...
	I0120 14:04:52.927792 1969949 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:52.945134 1969949 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace to be "Ready" ...
	I0120 14:04:52.998630 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:04:53.015208 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:04:53.015251 1969949 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:04:53.050964 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:04:53.053498 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:04:53.053531 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:04:53.131884 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:04:53.131915 1969949 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:04:53.156697 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:04:53.156734 1969949 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:04:53.267300 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:04:53.267329 1969949 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:04:53.267739 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:04:53.267765 1969949 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:04:53.452299 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:04:53.456705 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.456735 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.457124 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.457209 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.457135 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:53.457264 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.457356 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.457651 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.457667 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.461528 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:04:53.461555 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:04:53.471471 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.471505 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.471848 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.471864 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.515363 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:04:53.515398 1969949 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:04:53.636963 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:04:53.637001 1969949 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:04:53.840979 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:04:53.841011 1969949 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:04:53.959045 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:04:53.959082 1969949 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:04:54.051582 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:04:54.051618 1969949 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:04:54.170664 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:04:54.682801 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.631779213s)
	I0120 14:04:54.682872 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:54.682887 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:54.683248 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:54.683271 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:54.683286 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:54.683296 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:54.683571 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:54.683595 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:54.683577 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:54.982997 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:55.132956 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.680599793s)
	I0120 14:04:55.133021 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.133038 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.133526 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.133549 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.133560 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.133568 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.133526 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.133807 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.133831 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.133847 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.133867 1969949 addons.go:479] Verifying addon metrics-server=true in "no-preload-648067"
	I0120 14:04:52.026070 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:54.026722 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:55.971683 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.800920116s)
	I0120 14:04:55.971747 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.971763 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.972123 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.972144 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.972155 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.972163 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.972460 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.973844 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.973867 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.975729 1969949 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-648067 addons enable metrics-server
	
	I0120 14:04:55.977469 1969949 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:04:53.331081 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:53.346851 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:53.346935 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:53.390862 1971155 cri.go:89] found id: ""
	I0120 14:04:53.390901 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.390915 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:53.390924 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:53.391007 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:53.433455 1971155 cri.go:89] found id: ""
	I0120 14:04:53.433482 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.433491 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:53.433497 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:53.433555 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:53.477771 1971155 cri.go:89] found id: ""
	I0120 14:04:53.477805 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.477817 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:53.477826 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:53.477898 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:53.518330 1971155 cri.go:89] found id: ""
	I0120 14:04:53.518365 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.518375 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:53.518384 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:53.518461 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:53.557755 1971155 cri.go:89] found id: ""
	I0120 14:04:53.557804 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.557817 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:53.557827 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:53.557907 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:53.600681 1971155 cri.go:89] found id: ""
	I0120 14:04:53.600718 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.600730 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:53.600739 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:53.600836 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:53.644255 1971155 cri.go:89] found id: ""
	I0120 14:04:53.644291 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.644302 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:53.644311 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:53.644398 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:53.681445 1971155 cri.go:89] found id: ""
	I0120 14:04:53.681485 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.681498 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:53.681513 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:53.681529 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:53.737076 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:53.737131 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:53.755500 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:53.755551 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:53.846378 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:53.846416 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:53.846435 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:53.956291 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:53.956337 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:56.505456 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:56.521259 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:56.521352 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:56.572379 1971155 cri.go:89] found id: ""
	I0120 14:04:56.572415 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.572427 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:56.572435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:56.572503 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:56.613123 1971155 cri.go:89] found id: ""
	I0120 14:04:56.613151 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.613162 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:56.613170 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:56.613237 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:56.650863 1971155 cri.go:89] found id: ""
	I0120 14:04:56.650896 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.650904 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:56.650911 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:56.650967 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:56.686709 1971155 cri.go:89] found id: ""
	I0120 14:04:56.686741 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.686749 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:56.686756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:56.686813 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:56.722765 1971155 cri.go:89] found id: ""
	I0120 14:04:56.722794 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.722802 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:56.722809 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:56.722867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:56.762188 1971155 cri.go:89] found id: ""
	I0120 14:04:56.762223 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.762235 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:56.762244 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:56.762321 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:56.807714 1971155 cri.go:89] found id: ""
	I0120 14:04:56.807741 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.807750 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:56.807756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:56.807818 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:56.846817 1971155 cri.go:89] found id: ""
	I0120 14:04:56.846851 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.846860 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:56.846870 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:56.846884 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:56.919562 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:56.919593 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:56.919613 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:57.007957 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:57.008011 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:57.051295 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:57.051339 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:57.104114 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:57.104172 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:54.779036 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:56.272135 1970602 pod_ready.go:82] duration metric: took 4m0.000512351s for pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace to be "Ready" ...
	E0120 14:04:56.272179 1970602 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:04:56.272203 1970602 pod_ready.go:39] duration metric: took 4m14.631982517s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:56.272284 1970602 kubeadm.go:597] duration metric: took 4m21.961665482s to restartPrimaryControlPlane
	W0120 14:04:56.272373 1970602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:04:56.272404 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:04:55.979014 1969949 addons.go:514] duration metric: took 3.407316682s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:04:57.451990 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.452924 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:56.527827 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.026535 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.620229 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:59.637010 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:59.637114 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:59.680984 1971155 cri.go:89] found id: ""
	I0120 14:04:59.681020 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.681032 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:59.681041 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:59.681128 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:59.725445 1971155 cri.go:89] found id: ""
	I0120 14:04:59.725480 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.725492 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:59.725501 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:59.725573 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:59.767962 1971155 cri.go:89] found id: ""
	I0120 14:04:59.767999 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.768012 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:59.768020 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:59.768091 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:59.812201 1971155 cri.go:89] found id: ""
	I0120 14:04:59.812240 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.812252 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:59.812267 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:59.812335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:59.853005 1971155 cri.go:89] found id: ""
	I0120 14:04:59.853034 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.853043 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:59.853049 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:59.853131 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:59.890747 1971155 cri.go:89] found id: ""
	I0120 14:04:59.890859 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.890878 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:59.890889 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:59.890969 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:59.934050 1971155 cri.go:89] found id: ""
	I0120 14:04:59.934090 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.934102 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:59.934110 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:59.934179 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:59.977069 1971155 cri.go:89] found id: ""
	I0120 14:04:59.977106 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.977119 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:59.977131 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:59.977150 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:00.070208 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:00.070261 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:00.116521 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:00.116557 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:00.175645 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:00.175695 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:00.192183 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:00.192228 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:00.273233 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:01.452480 1969949 pod_ready.go:93] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.452519 1969949 pod_ready.go:82] duration metric: took 8.507352286s for pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.452534 1969949 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.458456 1969949 pod_ready.go:93] pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.458488 1969949 pod_ready.go:82] duration metric: took 5.941966ms for pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.458503 1969949 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.465708 1969949 pod_ready.go:93] pod "etcd-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.465733 1969949 pod_ready.go:82] duration metric: took 7.221959ms for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.465745 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.473764 1969949 pod_ready.go:93] pod "kube-apiserver-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.473796 1969949 pod_ready.go:82] duration metric: took 8.041648ms for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.473815 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.480463 1969949 pod_ready.go:93] pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.480494 1969949 pod_ready.go:82] duration metric: took 6.670074ms for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.480508 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kr6tq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.849787 1969949 pod_ready.go:93] pod "kube-proxy-kr6tq" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.849820 1969949 pod_ready.go:82] duration metric: took 369.302403ms for pod "kube-proxy-kr6tq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.849834 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:02.250242 1969949 pod_ready.go:93] pod "kube-scheduler-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:02.250279 1969949 pod_ready.go:82] duration metric: took 400.436958ms for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:02.250289 1969949 pod_ready.go:39] duration metric: took 9.322472589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:02.250305 1969949 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:02.250373 1969949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:02.307690 1969949 api_server.go:72] duration metric: took 9.736077102s to wait for apiserver process to appear ...
	I0120 14:05:02.307725 1969949 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:02.307751 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 14:05:02.312837 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0120 14:05:02.314012 1969949 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:02.314038 1969949 api_server.go:131] duration metric: took 6.305469ms to wait for apiserver health ...
	I0120 14:05:02.314047 1969949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:02.454048 1969949 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:02.454092 1969949 system_pods.go:61] "coredns-668d6bf9bc-2fbd7" [d2cf52fe-b375-47cd-a4bf-d7ef8e07a4b7] Running
	I0120 14:05:02.454099 1969949 system_pods.go:61] "coredns-668d6bf9bc-86xhz" [4af72226-8186-40e7-a923-01381cc52731] Running
	I0120 14:05:02.454104 1969949 system_pods.go:61] "etcd-no-preload-648067" [87debb8b-80bc-41cc-91f3-7b905ab8177c] Running
	I0120 14:05:02.454109 1969949 system_pods.go:61] "kube-apiserver-no-preload-648067" [6b1f5f1b-67ae-4ab2-a186-1c5224fcbc4e] Running
	I0120 14:05:02.454114 1969949 system_pods.go:61] "kube-controller-manager-no-preload-648067" [1bf90869-71a8-4459-a1b8-b59f78af8a8b] Running
	I0120 14:05:02.454119 1969949 system_pods.go:61] "kube-proxy-kr6tq" [462ab3d1-c225-4319-bac8-926a1e43a14d] Running
	I0120 14:05:02.454125 1969949 system_pods.go:61] "kube-scheduler-no-preload-648067" [38edfe65-9c58-4a24-b108-c22846010b97] Running
	I0120 14:05:02.454136 1969949 system_pods.go:61] "metrics-server-f79f97bbb-9kb5f" [fb8dd9df-cd37-4779-af22-4abd91dbc421] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:02.454144 1969949 system_pods.go:61] "storage-provisioner" [12bde765-1258-4689-b448-64208dd30638] Running
	I0120 14:05:02.454158 1969949 system_pods.go:74] duration metric: took 140.103109ms to wait for pod list to return data ...
	I0120 14:05:02.454172 1969949 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:02.650007 1969949 default_sa.go:45] found service account: "default"
	I0120 14:05:02.650050 1969949 default_sa.go:55] duration metric: took 195.869128ms for default service account to be created ...
	I0120 14:05:02.650064 1969949 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:02.853144 1969949 system_pods.go:87] 9 kube-system pods found
	I0120 14:05:01.028886 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:03.526512 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:05.527941 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:02.773877 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:02.788560 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:02.788661 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:02.838025 1971155 cri.go:89] found id: ""
	I0120 14:05:02.838061 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.838073 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:02.838082 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:02.838152 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:02.879106 1971155 cri.go:89] found id: ""
	I0120 14:05:02.879139 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.879150 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:02.879158 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:02.879226 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:02.919842 1971155 cri.go:89] found id: ""
	I0120 14:05:02.919883 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.919896 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:02.919905 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:02.919978 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:02.959612 1971155 cri.go:89] found id: ""
	I0120 14:05:02.959644 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.959656 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:02.959664 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:02.959737 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:03.018360 1971155 cri.go:89] found id: ""
	I0120 14:05:03.018392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.018401 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:03.018408 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:03.018491 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:03.064749 1971155 cri.go:89] found id: ""
	I0120 14:05:03.064779 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.064788 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:03.064801 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:03.064874 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:03.114566 1971155 cri.go:89] found id: ""
	I0120 14:05:03.114595 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.114617 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:03.114626 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:03.114695 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:03.163672 1971155 cri.go:89] found id: ""
	I0120 14:05:03.163707 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.163720 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:03.163733 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:03.163750 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:03.243662 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:03.243718 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:03.261586 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:03.261629 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:03.358343 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:03.358377 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:03.358393 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:03.452803 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:03.452852 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:06.004224 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:06.019368 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:06.019459 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:06.068617 1971155 cri.go:89] found id: ""
	I0120 14:05:06.068655 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.068668 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:06.068678 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:06.068747 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:06.112806 1971155 cri.go:89] found id: ""
	I0120 14:05:06.112859 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.112874 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:06.112883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:06.112960 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:06.150653 1971155 cri.go:89] found id: ""
	I0120 14:05:06.150695 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.150708 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:06.150716 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:06.150788 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:06.190915 1971155 cri.go:89] found id: ""
	I0120 14:05:06.190958 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.190973 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:06.190992 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:06.191077 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:06.237577 1971155 cri.go:89] found id: ""
	I0120 14:05:06.237616 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.237627 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:06.237636 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:06.237712 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:06.280826 1971155 cri.go:89] found id: ""
	I0120 14:05:06.280861 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.280873 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:06.280883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:06.280958 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:06.317836 1971155 cri.go:89] found id: ""
	I0120 14:05:06.317872 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.317883 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:06.317892 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:06.317962 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:06.365531 1971155 cri.go:89] found id: ""
	I0120 14:05:06.365574 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.365587 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:06.365601 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:06.365626 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:06.460369 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:06.460403 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:06.460422 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:06.541919 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:06.541967 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:06.588755 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:06.588805 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:06.648087 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:06.648140 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:08.026139 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:10.026227 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:09.166758 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:09.184071 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:09.184193 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:09.222998 1971155 cri.go:89] found id: ""
	I0120 14:05:09.223035 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.223048 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:09.223056 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:09.223140 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:09.275875 1971155 cri.go:89] found id: ""
	I0120 14:05:09.275912 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.275926 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:09.275934 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:09.276006 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:09.320157 1971155 cri.go:89] found id: ""
	I0120 14:05:09.320192 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.320210 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:09.320218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:09.320309 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:09.366463 1971155 cri.go:89] found id: ""
	I0120 14:05:09.366496 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.366505 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:09.366511 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:09.366582 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:09.414645 1971155 cri.go:89] found id: ""
	I0120 14:05:09.414675 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.414683 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:09.414689 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:09.414758 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:09.474004 1971155 cri.go:89] found id: ""
	I0120 14:05:09.474047 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.474059 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:09.474068 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:09.474153 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:09.536187 1971155 cri.go:89] found id: ""
	I0120 14:05:09.536217 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.536224 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:09.536230 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:09.536289 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:09.574100 1971155 cri.go:89] found id: ""
	I0120 14:05:09.574134 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.574142 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:09.574154 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:09.574167 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:09.620881 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:09.620923 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:09.676117 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:09.676177 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:09.692431 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:09.692473 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:09.768800 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:09.768831 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:09.768851 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:12.350771 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:12.365286 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:12.365374 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:12.402924 1971155 cri.go:89] found id: ""
	I0120 14:05:12.402966 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.402978 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:12.402998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:12.403073 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:12.027431 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:14.526570 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:12.442108 1971155 cri.go:89] found id: ""
	I0120 14:05:12.442138 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.442147 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:12.442154 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:12.442211 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:12.484002 1971155 cri.go:89] found id: ""
	I0120 14:05:12.484058 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.484071 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:12.484078 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:12.484149 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:12.524060 1971155 cri.go:89] found id: ""
	I0120 14:05:12.524097 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.524109 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:12.524118 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:12.524201 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:12.563120 1971155 cri.go:89] found id: ""
	I0120 14:05:12.563147 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.563156 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:12.563163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:12.563232 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:12.604782 1971155 cri.go:89] found id: ""
	I0120 14:05:12.604824 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.604838 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:12.604847 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:12.604925 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:12.642278 1971155 cri.go:89] found id: ""
	I0120 14:05:12.642305 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.642316 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:12.642326 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:12.642391 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:12.682274 1971155 cri.go:89] found id: ""
	I0120 14:05:12.682311 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.682323 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:12.682337 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:12.682353 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:12.773735 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:12.773785 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:12.825008 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:12.825049 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:12.873999 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:12.874042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:12.888767 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:12.888804 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:12.965739 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:15.466957 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:15.493756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:15.493839 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:15.538680 1971155 cri.go:89] found id: ""
	I0120 14:05:15.538709 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.538717 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:15.538724 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:15.538783 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:15.583029 1971155 cri.go:89] found id: ""
	I0120 14:05:15.583069 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.583081 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:15.583089 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:15.583174 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:15.623762 1971155 cri.go:89] found id: ""
	I0120 14:05:15.623801 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.623815 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:15.623825 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:15.623903 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:15.663883 1971155 cri.go:89] found id: ""
	I0120 14:05:15.663921 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.663930 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:15.663938 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:15.664013 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:15.701723 1971155 cri.go:89] found id: ""
	I0120 14:05:15.701758 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.701769 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:15.701778 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:15.701847 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:15.741612 1971155 cri.go:89] found id: ""
	I0120 14:05:15.741649 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.741661 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:15.741670 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:15.741736 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:15.783225 1971155 cri.go:89] found id: ""
	I0120 14:05:15.783257 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.783267 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:15.783275 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:15.783353 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:15.823664 1971155 cri.go:89] found id: ""
	I0120 14:05:15.823699 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.823713 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:15.823725 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:15.823740 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:15.876890 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:15.876936 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:15.892034 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:15.892077 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:15.967939 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:15.967966 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:15.967982 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:16.049913 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:16.049961 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:16.527187 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:19.028271 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:18.599849 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:18.613686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:18.613756 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:18.656070 1971155 cri.go:89] found id: ""
	I0120 14:05:18.656104 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.656113 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:18.656120 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:18.656184 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:18.694391 1971155 cri.go:89] found id: ""
	I0120 14:05:18.694420 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.694429 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:18.694435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:18.694499 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:18.733057 1971155 cri.go:89] found id: ""
	I0120 14:05:18.733094 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.733107 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:18.733114 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:18.733187 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:18.770955 1971155 cri.go:89] found id: ""
	I0120 14:05:18.770985 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.770993 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:18.770998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:18.771065 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:18.805878 1971155 cri.go:89] found id: ""
	I0120 14:05:18.805912 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.805924 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:18.805932 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:18.806015 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:18.843859 1971155 cri.go:89] found id: ""
	I0120 14:05:18.843891 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.843904 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:18.843912 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:18.843981 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:18.882554 1971155 cri.go:89] found id: ""
	I0120 14:05:18.882585 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.882594 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:18.882622 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:18.882686 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:18.919206 1971155 cri.go:89] found id: ""
	I0120 14:05:18.919242 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.919258 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:18.919269 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:18.919284 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:18.969428 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:18.969476 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:18.984666 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:18.984702 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:19.060472 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:19.060502 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:19.060517 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:19.136205 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:19.136248 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:21.681437 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:21.695755 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:21.695840 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:21.732554 1971155 cri.go:89] found id: ""
	I0120 14:05:21.732587 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.732599 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:21.732609 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:21.732680 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:21.771047 1971155 cri.go:89] found id: ""
	I0120 14:05:21.771078 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.771087 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:21.771093 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:21.771149 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:21.806053 1971155 cri.go:89] found id: ""
	I0120 14:05:21.806084 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.806096 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:21.806104 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:21.806176 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:21.843647 1971155 cri.go:89] found id: ""
	I0120 14:05:21.843679 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.843692 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:21.843699 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:21.843767 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:21.878399 1971155 cri.go:89] found id: ""
	I0120 14:05:21.878437 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.878449 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:21.878458 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:21.878531 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:21.912712 1971155 cri.go:89] found id: ""
	I0120 14:05:21.912746 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.912757 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:21.912770 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:21.912842 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:21.948182 1971155 cri.go:89] found id: ""
	I0120 14:05:21.948214 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.948225 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:21.948241 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:21.948311 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:21.987907 1971155 cri.go:89] found id: ""
	I0120 14:05:21.987945 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.987954 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:21.987964 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:21.987977 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:22.037198 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:22.037244 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:22.053238 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:22.053293 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:22.125680 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:22.125703 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:22.125721 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:22.208323 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:22.208371 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:21.529531 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:24.025073 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:24.752796 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:24.769865 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:24.769967 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:24.809247 1971155 cri.go:89] found id: ""
	I0120 14:05:24.809282 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.809293 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:24.809305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:24.809378 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:24.849761 1971155 cri.go:89] found id: ""
	I0120 14:05:24.849788 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.849797 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:24.849803 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:24.849867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:24.892195 1971155 cri.go:89] found id: ""
	I0120 14:05:24.892226 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.892239 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:24.892249 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:24.892315 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:24.935367 1971155 cri.go:89] found id: ""
	I0120 14:05:24.935400 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.935412 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:24.935420 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:24.935488 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:24.980132 1971155 cri.go:89] found id: ""
	I0120 14:05:24.980164 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.980179 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:24.980188 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:24.980269 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:25.017365 1971155 cri.go:89] found id: ""
	I0120 14:05:25.017394 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.017405 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:25.017413 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:25.017487 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:25.059078 1971155 cri.go:89] found id: ""
	I0120 14:05:25.059115 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.059127 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:25.059163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:25.059276 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:25.099507 1971155 cri.go:89] found id: ""
	I0120 14:05:25.099545 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.099557 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:25.099571 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:25.099588 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:25.174356 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:25.174385 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:25.174412 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:25.260260 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:25.260303 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:25.304309 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:25.304342 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:25.358340 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:25.358388 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:24.178761 1970602 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.906332562s)
	I0120 14:05:24.178859 1970602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:05:24.194902 1970602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:05:24.206080 1970602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:05:24.217371 1970602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:05:24.217398 1970602 kubeadm.go:157] found existing configuration files:
	
	I0120 14:05:24.217448 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:05:24.227549 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:05:24.227627 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:05:24.238584 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:05:24.249016 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:05:24.249171 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:05:24.260537 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:05:24.270728 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:05:24.270792 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:05:24.281345 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:05:24.291266 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:05:24.291344 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:05:24.302258 1970602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:05:24.477322 1970602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:05:26.026356 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:28.027425 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:30.525634 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:27.876603 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:27.892994 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:27.893071 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:27.931991 1971155 cri.go:89] found id: ""
	I0120 14:05:27.932048 1971155 logs.go:282] 0 containers: []
	W0120 14:05:27.932060 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:27.932068 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:27.932139 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:27.968882 1971155 cri.go:89] found id: ""
	I0120 14:05:27.968917 1971155 logs.go:282] 0 containers: []
	W0120 14:05:27.968926 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:27.968933 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:27.968998 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:28.009604 1971155 cri.go:89] found id: ""
	I0120 14:05:28.009635 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.009644 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:28.009650 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:28.009708 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:28.050036 1971155 cri.go:89] found id: ""
	I0120 14:05:28.050069 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.050080 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:28.050087 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:28.050156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:28.092348 1971155 cri.go:89] found id: ""
	I0120 14:05:28.092392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.092427 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:28.092436 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:28.092512 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:28.133751 1971155 cri.go:89] found id: ""
	I0120 14:05:28.133787 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.133796 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:28.133804 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:28.133875 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:28.177231 1971155 cri.go:89] found id: ""
	I0120 14:05:28.177268 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.177280 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:28.177288 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:28.177382 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:28.217125 1971155 cri.go:89] found id: ""
	I0120 14:05:28.217160 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.217175 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:28.217189 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:28.217207 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:28.305446 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:28.305480 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:28.305498 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:28.389940 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:28.389996 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:28.445472 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:28.445519 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:28.503281 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:28.503343 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:31.023457 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:31.039576 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:31.039665 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:31.090049 1971155 cri.go:89] found id: ""
	I0120 14:05:31.090086 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.090099 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:31.090108 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:31.090199 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:31.129134 1971155 cri.go:89] found id: ""
	I0120 14:05:31.129168 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.129180 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:31.129189 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:31.129246 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:31.169790 1971155 cri.go:89] found id: ""
	I0120 14:05:31.169822 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.169834 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:31.169845 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:31.169940 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:31.210981 1971155 cri.go:89] found id: ""
	I0120 14:05:31.211017 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.211030 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:31.211039 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:31.211126 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:31.254051 1971155 cri.go:89] found id: ""
	I0120 14:05:31.254081 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.254089 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:31.254096 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:31.254175 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:31.301717 1971155 cri.go:89] found id: ""
	I0120 14:05:31.301750 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.301772 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:31.301782 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:31.301847 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:31.343204 1971155 cri.go:89] found id: ""
	I0120 14:05:31.343233 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.343242 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:31.343248 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:31.343304 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:31.382466 1971155 cri.go:89] found id: ""
	I0120 14:05:31.382501 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.382512 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:31.382525 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:31.382544 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:31.461732 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:31.461781 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:31.461801 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:31.559483 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:31.559566 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:31.606795 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:31.606833 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:31.661423 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:31.661468 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:33.376770 1970602 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:05:33.376853 1970602 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:05:33.376989 1970602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:05:33.377149 1970602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:05:33.377293 1970602 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:05:33.377400 1970602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:05:33.378924 1970602 out.go:235]   - Generating certificates and keys ...
	I0120 14:05:33.379025 1970602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:05:33.379104 1970602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:05:33.379208 1970602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:05:33.379307 1970602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:05:33.379417 1970602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:05:33.379524 1970602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:05:33.379607 1970602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:05:33.379717 1970602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:05:33.379839 1970602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:05:33.379966 1970602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:05:33.380043 1970602 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:05:33.380129 1970602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:05:33.380198 1970602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:05:33.380268 1970602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:05:33.380343 1970602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:05:33.380413 1970602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:05:33.380471 1970602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:05:33.380560 1970602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:05:33.380637 1970602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:05:33.382317 1970602 out.go:235]   - Booting up control plane ...
	I0120 14:05:33.382425 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:05:33.382512 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:05:33.382596 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:05:33.382747 1970602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:05:33.382857 1970602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:05:33.382912 1970602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:05:33.383102 1970602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:05:33.383280 1970602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:05:33.383370 1970602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.354939ms
	I0120 14:05:33.383469 1970602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:05:33.383558 1970602 kubeadm.go:310] [api-check] The API server is healthy after 5.504896351s
	I0120 14:05:33.383728 1970602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:05:33.383925 1970602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:05:33.384013 1970602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:05:33.384335 1970602 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-647109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:05:33.384423 1970602 kubeadm.go:310] [bootstrap-token] Using token: lua4mv.z68od0ysi19pmefo
	I0120 14:05:33.386221 1970602 out.go:235]   - Configuring RBAC rules ...
	I0120 14:05:33.386365 1970602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:05:33.386446 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:05:33.386593 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:05:33.386761 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:05:33.386926 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:05:33.387058 1970602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:05:33.387208 1970602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:05:33.387276 1970602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:05:33.387343 1970602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:05:33.387355 1970602 kubeadm.go:310] 
	I0120 14:05:33.387441 1970602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:05:33.387450 1970602 kubeadm.go:310] 
	I0120 14:05:33.387576 1970602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:05:33.387589 1970602 kubeadm.go:310] 
	I0120 14:05:33.387627 1970602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:05:33.387678 1970602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:05:33.387738 1970602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:05:33.387748 1970602 kubeadm.go:310] 
	I0120 14:05:33.387843 1970602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:05:33.387853 1970602 kubeadm.go:310] 
	I0120 14:05:33.387930 1970602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:05:33.387939 1970602 kubeadm.go:310] 
	I0120 14:05:33.388012 1970602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:05:33.388091 1970602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:05:33.388156 1970602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:05:33.388160 1970602 kubeadm.go:310] 
	I0120 14:05:33.388249 1970602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:05:33.388325 1970602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:05:33.388332 1970602 kubeadm.go:310] 
	I0120 14:05:33.388404 1970602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lua4mv.z68od0ysi19pmefo \
	I0120 14:05:33.388491 1970602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:05:33.388524 1970602 kubeadm.go:310] 	--control-plane 
	I0120 14:05:33.388531 1970602 kubeadm.go:310] 
	I0120 14:05:33.388617 1970602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:05:33.388625 1970602 kubeadm.go:310] 
	I0120 14:05:33.388736 1970602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lua4mv.z68od0ysi19pmefo \
	I0120 14:05:33.388834 1970602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:05:33.388846 1970602 cni.go:84] Creating CNI manager for ""
	I0120 14:05:33.388853 1970602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:05:33.390876 1970602 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:05:33.392513 1970602 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:05:33.407354 1970602 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:05:33.428824 1970602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:05:33.428934 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:33.428977 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-647109 minikube.k8s.io/updated_at=2025_01_20T14_05_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=embed-certs-647109 minikube.k8s.io/primary=true
	I0120 14:05:33.473138 1970602 ops.go:34] apiserver oom_adj: -16
	I0120 14:05:33.718712 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:32.526764 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:35.026819 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:34.218762 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:34.719381 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:35.219746 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:35.718888 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:36.218775 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:36.718813 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:37.219353 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:37.393979 1970602 kubeadm.go:1113] duration metric: took 3.965125255s to wait for elevateKubeSystemPrivileges
	I0120 14:05:37.394019 1970602 kubeadm.go:394] duration metric: took 5m3.132880668s to StartCluster
	I0120 14:05:37.394048 1970602 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:37.394150 1970602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:05:37.396378 1970602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:37.396706 1970602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:05:37.396823 1970602 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:05:37.396933 1970602 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-647109"
	I0120 14:05:37.396953 1970602 config.go:182] Loaded profile config "embed-certs-647109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:05:37.396970 1970602 addons.go:69] Setting metrics-server=true in profile "embed-certs-647109"
	I0120 14:05:37.396980 1970602 addons.go:238] Setting addon metrics-server=true in "embed-certs-647109"
	W0120 14:05:37.396988 1970602 addons.go:247] addon metrics-server should already be in state true
	I0120 14:05:37.396987 1970602 addons.go:69] Setting default-storageclass=true in profile "embed-certs-647109"
	I0120 14:05:37.396953 1970602 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-647109"
	I0120 14:05:37.397011 1970602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-647109"
	W0120 14:05:37.397012 1970602 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:05:37.397041 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397044 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397479 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397483 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397495 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397519 1970602 addons.go:69] Setting dashboard=true in profile "embed-certs-647109"
	I0120 14:05:37.397526 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397532 1970602 addons.go:238] Setting addon dashboard=true in "embed-certs-647109"
	W0120 14:05:37.397539 1970602 addons.go:247] addon dashboard should already be in state true
	I0120 14:05:37.397563 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397606 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397785 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397855 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397900 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.401795 1970602 out.go:177] * Verifying Kubernetes components...
	I0120 14:05:34.179481 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:34.195424 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:34.195496 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:34.236592 1971155 cri.go:89] found id: ""
	I0120 14:05:34.236623 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.236632 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:34.236639 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:34.236696 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:34.275803 1971155 cri.go:89] found id: ""
	I0120 14:05:34.275836 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.275848 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:34.275855 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:34.275944 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:34.315900 1971155 cri.go:89] found id: ""
	I0120 14:05:34.315932 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.315944 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:34.315952 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:34.316019 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:34.353614 1971155 cri.go:89] found id: ""
	I0120 14:05:34.353646 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.353655 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:34.353661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:34.353735 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:34.395635 1971155 cri.go:89] found id: ""
	I0120 14:05:34.395673 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.395685 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:34.395698 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:34.395782 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:34.435631 1971155 cri.go:89] found id: ""
	I0120 14:05:34.435662 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.435672 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:34.435678 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:34.435742 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:34.474904 1971155 cri.go:89] found id: ""
	I0120 14:05:34.474940 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.474952 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:34.474960 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:34.475030 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:34.513643 1971155 cri.go:89] found id: ""
	I0120 14:05:34.513675 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.513688 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:34.513701 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:34.513719 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:34.531525 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:34.531559 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:34.614600 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:34.614649 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:34.614667 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:34.691236 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:34.691282 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:34.739567 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:34.739616 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:37.294798 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:37.313219 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:37.313309 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:37.360355 1971155 cri.go:89] found id: ""
	I0120 14:05:37.360392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.360406 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:37.360415 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:37.360493 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:37.400427 1971155 cri.go:89] found id: ""
	I0120 14:05:37.400456 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.400466 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:37.400475 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:37.400535 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:37.403396 1970602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:05:37.419631 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0120 14:05:37.419631 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0120 14:05:37.419751 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0120 14:05:37.420159 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.420340 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.420726 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.420753 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.420870 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.420883 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.421153 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.421286 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.421765 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.421807 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.421859 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.421907 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.423180 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.424356 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43445
	I0120 14:05:37.424853 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.427176 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.427218 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.431273 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.431306 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.431273 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.431590 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.431772 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.432414 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.432463 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.438218 1970602 addons.go:238] Setting addon default-storageclass=true in "embed-certs-647109"
	W0120 14:05:37.438363 1970602 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:05:37.438408 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.438859 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.439701 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.444146 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0120 14:05:37.444576 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
	I0120 14:05:37.444773 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.444915 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.445334 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.445367 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.445548 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.445565 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.445846 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.445940 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.446010 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.446155 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.448263 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.448850 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.451121 1970602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:05:37.451145 1970602 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:05:37.452901 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:05:37.452925 1970602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:05:37.452946 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.453029 1970602 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:37.453046 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:05:37.453066 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.457009 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457306 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.457323 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457535 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.457644 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457758 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.457905 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.458015 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.458314 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.458329 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.458460 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.458637 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.458741 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.458835 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.465409 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0120 14:05:37.466031 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.466695 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.466719 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.466964 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0120 14:05:37.467498 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.467603 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.468062 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.468085 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.468561 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.468603 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.469079 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.469289 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.471308 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.473344 1970602 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:05:37.475133 1970602 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:05:37.476628 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:05:37.476660 1970602 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:05:37.476691 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.480284 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.480952 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.480993 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.481641 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.481944 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.482177 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.482403 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.509821 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0120 14:05:37.510356 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.511017 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.511041 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.511533 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.511923 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.514239 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.514505 1970602 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:37.514525 1970602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:05:37.514547 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.518318 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.518891 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.518919 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.519100 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.519331 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.519489 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.519722 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.741139 1970602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:05:37.799051 1970602 node_ready.go:35] waiting up to 6m0s for node "embed-certs-647109" to be "Ready" ...
	I0120 14:05:37.809096 1970602 node_ready.go:49] node "embed-certs-647109" has status "Ready":"True"
	I0120 14:05:37.809130 1970602 node_ready.go:38] duration metric: took 10.033158ms for node "embed-certs-647109" to be "Ready" ...
	I0120 14:05:37.809146 1970602 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:37.819590 1970602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:37.940986 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:37.994181 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:05:37.994215 1970602 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:05:38.057795 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:05:38.057828 1970602 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:05:38.074299 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:05:38.074328 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:05:38.076399 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:38.161099 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:05:38.161133 1970602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:05:38.172032 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:05:38.172066 1970602 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:05:38.251253 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:05:38.251287 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:05:38.267793 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:38.267823 1970602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:05:38.300776 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:05:38.300806 1970602 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:05:38.438115 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:38.438263 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:05:38.438293 1970602 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:05:38.469992 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:05:38.470026 1970602 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:05:38.488178 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.488209 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.488602 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.488624 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.488633 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.488642 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.488642 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:38.488915 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.488928 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.506460 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.506490 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.506908 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.506932 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.506952 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:38.535768 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:05:38.535801 1970602 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:05:38.588204 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:38.588244 1970602 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:05:38.641430 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:37.532230 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:40.026877 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:39.322794 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24634872s)
	I0120 14:05:39.322872 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:39.322888 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:39.323266 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:39.323312 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:39.323332 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:39.323342 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:39.323351 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:39.323616 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:39.323623 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:39.323633 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:39.850519 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:40.002690 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.564518983s)
	I0120 14:05:40.002772 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.002791 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.003274 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.003336 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.003360 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.003372 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.003382 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.003762 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.003779 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.003791 1970602 addons.go:479] Verifying addon metrics-server=true in "embed-certs-647109"
	I0120 14:05:40.003823 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.923510 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.282025528s)
	I0120 14:05:40.923577 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.923608 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.923936 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.923983 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.924000 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.924023 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.924034 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.924348 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.924369 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.924375 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.926492 1970602 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-647109 addons enable metrics-server
	
	I0120 14:05:40.928141 1970602 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:05:37.472778 1971155 cri.go:89] found id: ""
	I0120 14:05:37.472800 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.472807 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:37.472814 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:37.472861 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:37.514813 1971155 cri.go:89] found id: ""
	I0120 14:05:37.514836 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.514846 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:37.514853 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:37.514912 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:37.559689 1971155 cri.go:89] found id: ""
	I0120 14:05:37.559724 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.559735 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:37.559768 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:37.559851 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:37.604249 1971155 cri.go:89] found id: ""
	I0120 14:05:37.604279 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.604291 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:37.604299 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:37.604372 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:37.655652 1971155 cri.go:89] found id: ""
	I0120 14:05:37.655689 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.655702 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:37.655710 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:37.655780 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:37.699626 1971155 cri.go:89] found id: ""
	I0120 14:05:37.699663 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.699677 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:37.699690 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:37.699706 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:37.761041 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:37.761105 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:37.789894 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:37.789933 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:37.870389 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:37.870424 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:37.870444 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:37.953788 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:37.953828 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:40.507832 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:40.526389 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:40.526479 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:40.564969 1971155 cri.go:89] found id: ""
	I0120 14:05:40.565007 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.565019 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:40.565028 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:40.565102 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:40.610815 1971155 cri.go:89] found id: ""
	I0120 14:05:40.610851 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.610863 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:40.610879 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:40.610950 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:40.656202 1971155 cri.go:89] found id: ""
	I0120 14:05:40.656241 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.656253 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:40.656261 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:40.656332 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:40.696520 1971155 cri.go:89] found id: ""
	I0120 14:05:40.696555 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.696567 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:40.696576 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:40.696655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:40.741177 1971155 cri.go:89] found id: ""
	I0120 14:05:40.741213 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.741224 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:40.741232 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:40.741321 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:40.787423 1971155 cri.go:89] found id: ""
	I0120 14:05:40.787463 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.787476 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:40.787486 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:40.787560 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:40.838180 1971155 cri.go:89] found id: ""
	I0120 14:05:40.838217 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.838227 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:40.838235 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:40.838308 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:40.877888 1971155 cri.go:89] found id: ""
	I0120 14:05:40.877922 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.877934 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:40.877947 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:40.877962 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:40.942664 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:40.942718 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:40.960105 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:40.960147 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:41.038583 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:41.038640 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:41.038660 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:41.125430 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:41.125499 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:40.930035 1970602 addons.go:514] duration metric: took 3.533222189s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:05:42.330147 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:43.342012 1970602 pod_ready.go:93] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.342038 1970602 pod_ready.go:82] duration metric: took 5.522419293s for pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.342050 1970602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.359479 1970602 pod_ready.go:93] pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.359506 1970602 pod_ready.go:82] duration metric: took 17.448444ms for pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.359518 1970602 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.403702 1970602 pod_ready.go:93] pod "etcd-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.403732 1970602 pod_ready.go:82] duration metric: took 44.20711ms for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.403744 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.413596 1970602 pod_ready.go:93] pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.413623 1970602 pod_ready.go:82] duration metric: took 9.873022ms for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.413634 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.421693 1970602 pod_ready.go:93] pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.421718 1970602 pod_ready.go:82] duration metric: took 8.077458ms for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.421731 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-chhpt" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.724510 1970602 pod_ready.go:93] pod "kube-proxy-chhpt" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.724537 1970602 pod_ready.go:82] duration metric: took 302.799519ms for pod "kube-proxy-chhpt" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.724549 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:42.527349 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:45.026552 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:43.677350 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:43.695745 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:43.695838 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:43.746662 1971155 cri.go:89] found id: ""
	I0120 14:05:43.746695 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.746710 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:43.746718 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:43.746791 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:43.802111 1971155 cri.go:89] found id: ""
	I0120 14:05:43.802142 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.802154 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:43.802163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:43.802234 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:43.849314 1971155 cri.go:89] found id: ""
	I0120 14:05:43.849351 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.849363 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:43.849371 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:43.849444 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:43.898242 1971155 cri.go:89] found id: ""
	I0120 14:05:43.898279 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.898293 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:43.898302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:43.898384 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:43.939248 1971155 cri.go:89] found id: ""
	I0120 14:05:43.939282 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.939293 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:43.939302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:43.939369 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:43.979271 1971155 cri.go:89] found id: ""
	I0120 14:05:43.979307 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.979327 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:43.979336 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:43.979408 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:44.016351 1971155 cri.go:89] found id: ""
	I0120 14:05:44.016387 1971155 logs.go:282] 0 containers: []
	W0120 14:05:44.016400 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:44.016409 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:44.016479 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:44.060965 1971155 cri.go:89] found id: ""
	I0120 14:05:44.061005 1971155 logs.go:282] 0 containers: []
	W0120 14:05:44.061017 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:44.061032 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:44.061050 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:44.076017 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:44.076070 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:44.159732 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:44.159761 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:44.159775 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:44.240721 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:44.240769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:44.285018 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:44.285061 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
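	The probe cycle above reduces to a handful of shell commands that can also be run by hand on the node when no control-plane containers appear; a minimal sketch, using only commands already present in the log (crictl for container lookup, journalctl for the kubelet and CRI-O units):

	    # list any container (running or exited) for a given control-plane component
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # if nothing is returned, inspect the kubelet and CRI-O unit logs instead
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400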
	I0120 14:05:46.839125 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:46.856748 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:46.856841 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:46.908851 1971155 cri.go:89] found id: ""
	I0120 14:05:46.908886 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.908898 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:46.908909 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:46.908978 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:46.949810 1971155 cri.go:89] found id: ""
	I0120 14:05:46.949865 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.949879 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:46.949887 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:46.949969 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:46.995158 1971155 cri.go:89] found id: ""
	I0120 14:05:46.995191 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.995201 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:46.995212 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:46.995284 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:47.034872 1971155 cri.go:89] found id: ""
	I0120 14:05:47.034905 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.034916 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:47.034924 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:47.034993 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:47.077500 1971155 cri.go:89] found id: ""
	I0120 14:05:47.077529 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.077537 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:47.077544 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:47.077608 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:47.118996 1971155 cri.go:89] found id: ""
	I0120 14:05:47.119027 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.119048 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:47.119059 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:47.119126 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:47.159902 1971155 cri.go:89] found id: ""
	I0120 14:05:47.159931 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.159943 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:47.159952 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:47.160027 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:47.201895 1971155 cri.go:89] found id: ""
	I0120 14:05:47.201922 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.201930 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:47.201942 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:47.201957 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:47.244852 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:47.244888 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:47.297439 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:47.297486 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:47.313519 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:47.313558 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:47.389340 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:47.389372 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:47.389391 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:45.324683 1970602 pod_ready.go:93] pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:45.324712 1970602 pod_ready.go:82] duration metric: took 1.600155124s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:45.324723 1970602 pod_ready.go:39] duration metric: took 7.515564286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:45.324743 1970602 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:45.324813 1970602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:45.381331 1970602 api_server.go:72] duration metric: took 7.98457351s to wait for apiserver process to appear ...
	I0120 14:05:45.381368 1970602 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:45.381388 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:05:45.386523 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I0120 14:05:45.387477 1970602 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:45.387504 1970602 api_server.go:131] duration metric: took 6.127764ms to wait for apiserver health ...
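	The healthz wait above is a plain HTTPS GET against the apiserver endpoint shown in the log; a rough manual equivalent (assuming the same address, and using -k to skip certificate verification, which the test harness handles differently) would be:

	    curl -k https://192.168.50.62:8443/healthz
	    # a healthy control plane answers with the body: ok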
	I0120 14:05:45.387513 1970602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:45.530457 1970602 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:45.530502 1970602 system_pods.go:61] "coredns-668d6bf9bc-ndbzp" [d43c588e-6fc1-435b-9c9a-8b19201596ae] Running
	I0120 14:05:45.530510 1970602 system_pods.go:61] "coredns-668d6bf9bc-ndv97" [3298cf5d-5983-463b-8aca-792fa1d94241] Running
	I0120 14:05:45.530516 1970602 system_pods.go:61] "etcd-embed-certs-647109" [58f40005-bda9-4a38-8e2a-8e3f4a869c20] Running
	I0120 14:05:45.530521 1970602 system_pods.go:61] "kube-apiserver-embed-certs-647109" [8e188c16-1d56-4972-baf1-20d8dd10f440] Running
	I0120 14:05:45.530527 1970602 system_pods.go:61] "kube-controller-manager-embed-certs-647109" [691924af-9adb-4788-9104-0dcca6ee95f3] Running
	I0120 14:05:45.530532 1970602 system_pods.go:61] "kube-proxy-chhpt" [a0244020-668f-4700-85c2-9562f4d0c920] Running
	I0120 14:05:45.530537 1970602 system_pods.go:61] "kube-scheduler-embed-certs-647109" [6b42ab84-e4cb-4dc8-a4ad-e7da476ec3b2] Running
	I0120 14:05:45.530548 1970602 system_pods.go:61] "metrics-server-f79f97bbb-nqwxp" [68d39045-4c01-40a2-9e8f-0f7734838f0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:45.530559 1970602 system_pods.go:61] "storage-provisioner" [8067c033-4ef4-4945-95b5-f4120df75f5c] Running
	I0120 14:05:45.530574 1970602 system_pods.go:74] duration metric: took 143.054434ms to wait for pod list to return data ...
	I0120 14:05:45.530587 1970602 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:45.727314 1970602 default_sa.go:45] found service account: "default"
	I0120 14:05:45.727359 1970602 default_sa.go:55] duration metric: took 196.759471ms for default service account to be created ...
	I0120 14:05:45.727373 1970602 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:45.927406 1970602 system_pods.go:87] 9 kube-system pods found
	I0120 14:05:47.027640 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:49.526205 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:49.969003 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:49.983821 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:49.983904 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:50.024496 1971155 cri.go:89] found id: ""
	I0120 14:05:50.024525 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.024536 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:50.024545 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:50.024611 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:50.066376 1971155 cri.go:89] found id: ""
	I0120 14:05:50.066408 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.066416 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:50.066423 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:50.066497 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:50.106918 1971155 cri.go:89] found id: ""
	I0120 14:05:50.107034 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.107055 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:50.107065 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:50.107154 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:50.154846 1971155 cri.go:89] found id: ""
	I0120 14:05:50.154940 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.154962 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:50.154981 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:50.155095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:50.228177 1971155 cri.go:89] found id: ""
	I0120 14:05:50.228218 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.228238 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:50.228249 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:50.228334 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:50.294106 1971155 cri.go:89] found id: ""
	I0120 14:05:50.294145 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.294158 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:50.294167 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:50.294242 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:50.340312 1971155 cri.go:89] found id: ""
	I0120 14:05:50.340357 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.340368 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:50.340375 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:50.340450 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:50.384031 1971155 cri.go:89] found id: ""
	I0120 14:05:50.384070 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.384082 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:50.384095 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:50.384112 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:50.399361 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:50.399396 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:50.484820 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:50.484851 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:50.484868 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:50.594107 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:50.594171 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:50.647700 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:50.647740 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:51.527819 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:54.026000 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:53.213104 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:53.229463 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:53.229538 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:53.270860 1971155 cri.go:89] found id: ""
	I0120 14:05:53.270896 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.270909 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:53.270917 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:53.270977 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:53.311721 1971155 cri.go:89] found id: ""
	I0120 14:05:53.311748 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.311757 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:53.311764 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:53.311818 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:53.350019 1971155 cri.go:89] found id: ""
	I0120 14:05:53.350053 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.350064 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:53.350073 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:53.350144 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:53.386955 1971155 cri.go:89] found id: ""
	I0120 14:05:53.386982 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.386990 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:53.386996 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:53.387059 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:53.427056 1971155 cri.go:89] found id: ""
	I0120 14:05:53.427096 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.427105 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:53.427112 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:53.427170 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:53.468506 1971155 cri.go:89] found id: ""
	I0120 14:05:53.468546 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.468559 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:53.468568 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:53.468642 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:53.505884 1971155 cri.go:89] found id: ""
	I0120 14:05:53.505926 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.505938 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:53.505948 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:53.506024 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:53.547189 1971155 cri.go:89] found id: ""
	I0120 14:05:53.547232 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.547244 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:53.547258 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:53.547282 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:53.629525 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:53.629559 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:53.629577 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:53.711943 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:53.711994 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:53.761408 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:53.761442 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:53.815735 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:53.815781 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:56.332189 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:56.347525 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:56.347622 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:56.389104 1971155 cri.go:89] found id: ""
	I0120 14:05:56.389145 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.389156 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:56.389165 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:56.389244 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:56.427108 1971155 cri.go:89] found id: ""
	I0120 14:05:56.427151 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.427163 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:56.427173 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:56.427252 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:56.473424 1971155 cri.go:89] found id: ""
	I0120 14:05:56.473457 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.473469 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:56.473477 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:56.473560 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:56.513450 1971155 cri.go:89] found id: ""
	I0120 14:05:56.513485 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.513495 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:56.513504 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:56.513578 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:56.562482 1971155 cri.go:89] found id: ""
	I0120 14:05:56.562533 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.562546 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:56.562554 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:56.562652 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:56.604745 1971155 cri.go:89] found id: ""
	I0120 14:05:56.604776 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.604787 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:56.604795 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:56.604867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:56.645202 1971155 cri.go:89] found id: ""
	I0120 14:05:56.645245 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.645259 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:56.645268 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:56.645343 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:56.686351 1971155 cri.go:89] found id: ""
	I0120 14:05:56.686379 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.686388 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:56.686405 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:56.686419 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:56.700157 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:56.700206 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:56.780260 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:56.780289 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:56.780306 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:56.859551 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:56.859590 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:56.900940 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:56.900970 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:56.027202 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:58.526277 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:00.527173 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:59.457051 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:59.472587 1971155 kubeadm.go:597] duration metric: took 4m3.227513478s to restartPrimaryControlPlane
	W0120 14:05:59.472685 1971155 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:05:59.472723 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:06:01.310474 1971155 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.837720995s)
	I0120 14:06:01.310572 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:06:01.327408 1971155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:06:01.339235 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:06:01.350183 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:06:01.350209 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:06:01.350259 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:06:01.361183 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:06:01.361270 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:06:01.372352 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:06:01.382976 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:06:01.383040 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:06:01.394492 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:06:01.405628 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:06:01.405694 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:06:01.417040 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:06:01.428807 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:06:01.428872 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
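	The stale-config check above follows a simple pattern: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and remove the file when the check fails; a condensed sketch of that cleanup (endpoint and file names taken from the log, the loop form itself is an assumption):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done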
	I0120 14:06:01.441345 1971155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:06:01.698918 1971155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:06:03.025832 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:05.026627 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:07.027188 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:09.028290 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:11.031964 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:13.525789 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:15.526985 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:18.026476 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:20.027814 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:22.526030 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:24.526922 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:26.527440 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:28.528148 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:31.026333 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:33.527109 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:36.027336 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:38.526086 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:39.020400 1971324 pod_ready.go:82] duration metric: took 4m0.001084886s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" ...
	E0120 14:06:39.020434 1971324 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:06:39.020464 1971324 pod_ready.go:39] duration metric: took 4m13.544546991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:06:39.020512 1971324 kubeadm.go:597] duration metric: took 4m20.388785998s to restartPrimaryControlPlane
	W0120 14:06:39.020594 1971324 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:06:39.020633 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:07:06.810143 1971324 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.789476664s)
	I0120 14:07:06.810247 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:07:06.832457 1971324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:07:06.852749 1971324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:07:06.873857 1971324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:07:06.873882 1971324 kubeadm.go:157] found existing configuration files:
	
	I0120 14:07:06.873943 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 14:07:06.886791 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:07:06.886875 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:07:06.909304 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 14:07:06.925495 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:07:06.925578 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:07:06.946915 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 14:07:06.958045 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:07:06.958118 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:07:06.969792 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 14:07:06.980477 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:07:06.980546 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:07:06.992154 1971324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:07:07.047808 1971324 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:07:07.048054 1971324 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:07.167444 1971324 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:07.167631 1971324 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:07.167755 1971324 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:07:07.176704 1971324 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:07.178906 1971324 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:07.179018 1971324 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:07.179096 1971324 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:07.179214 1971324 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:07.179292 1971324 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:07.179407 1971324 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:07.179531 1971324 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:07.179632 1971324 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:07.179728 1971324 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:07.179830 1971324 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:07.179923 1971324 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:07.180006 1971324 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:07.180105 1971324 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:07.399949 1971324 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:07.525338 1971324 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:07:07.958528 1971324 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:08.085273 1971324 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:08.227675 1971324 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:08.228174 1971324 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:08.230880 1971324 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:08.232690 1971324 out.go:235]   - Booting up control plane ...
	I0120 14:07:08.232803 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:08.232885 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:08.233165 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:08.255003 1971324 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:08.263855 1971324 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:08.263966 1971324 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:08.414539 1971324 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:07:08.414702 1971324 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:07:08.915282 1971324 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.191909ms
	I0120 14:07:08.915410 1971324 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:07:14.418359 1971324 kubeadm.go:310] [api-check] The API server is healthy after 5.50145508s
	I0120 14:07:14.430935 1971324 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:07:14.460608 1971324 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:07:14.497450 1971324 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:07:14.497787 1971324 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-727256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:07:14.515343 1971324 kubeadm.go:310] [bootstrap-token] Using token: tkd27p.2n22jx81j70drifi
	I0120 14:07:14.516953 1971324 out.go:235]   - Configuring RBAC rules ...
	I0120 14:07:14.517145 1971324 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:07:14.535550 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:07:14.549490 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:07:14.554516 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:07:14.559606 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:07:14.567744 1971324 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:07:14.823696 1971324 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:07:15.255724 1971324 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:07:15.828061 1971324 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:07:15.829612 1971324 kubeadm.go:310] 
	I0120 14:07:15.829720 1971324 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:07:15.829734 1971324 kubeadm.go:310] 
	I0120 14:07:15.829934 1971324 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:07:15.829961 1971324 kubeadm.go:310] 
	I0120 14:07:15.829995 1971324 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:07:15.830134 1971324 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:07:15.830216 1971324 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:07:15.830238 1971324 kubeadm.go:310] 
	I0120 14:07:15.830300 1971324 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:07:15.830307 1971324 kubeadm.go:310] 
	I0120 14:07:15.830345 1971324 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:07:15.830351 1971324 kubeadm.go:310] 
	I0120 14:07:15.830452 1971324 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:07:15.830564 1971324 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:07:15.830687 1971324 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:07:15.830712 1971324 kubeadm.go:310] 
	I0120 14:07:15.830839 1971324 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:07:15.830917 1971324 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:07:15.830928 1971324 kubeadm.go:310] 
	I0120 14:07:15.831050 1971324 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token tkd27p.2n22jx81j70drifi \
	I0120 14:07:15.831203 1971324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:07:15.831236 1971324 kubeadm.go:310] 	--control-plane 
	I0120 14:07:15.831250 1971324 kubeadm.go:310] 
	I0120 14:07:15.831373 1971324 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:07:15.831381 1971324 kubeadm.go:310] 
	I0120 14:07:15.831510 1971324 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token tkd27p.2n22jx81j70drifi \
	I0120 14:07:15.831608 1971324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:07:15.832608 1971324 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:07:15.832644 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:07:15.832665 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:07:15.834574 1971324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:07:15.836200 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:07:15.852486 1971324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:07:15.883072 1971324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:07:15.883163 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:15.883217 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-727256 minikube.k8s.io/updated_at=2025_01_20T14_07_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=default-k8s-diff-port-727256 minikube.k8s.io/primary=true
	I0120 14:07:15.919057 1971324 ops.go:34] apiserver oom_adj: -16
	I0120 14:07:16.264800 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:16.765768 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:17.265700 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:17.765591 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:18.265120 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:18.765375 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.265828 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.765233 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.871124 1971324 kubeadm.go:1113] duration metric: took 3.988031359s to wait for elevateKubeSystemPrivileges
	I0120 14:07:19.871168 1971324 kubeadm.go:394] duration metric: took 5m1.294931591s to StartCluster
	I0120 14:07:19.871195 1971324 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:07:19.871308 1971324 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:07:19.872935 1971324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:07:19.873227 1971324 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:07:19.873360 1971324 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:07:19.873432 1971324 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873448 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:07:19.873475 1971324 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873456 1971324 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873525 1971324 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:07:19.873515 1971324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-727256"
	I0120 14:07:19.873512 1971324 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873579 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873591 1971324 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873602 1971324 addons.go:247] addon dashboard should already be in state true
	I0120 14:07:19.873461 1971324 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873645 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873644 1971324 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873658 1971324 addons.go:247] addon metrics-server should already be in state true
	I0120 14:07:19.873693 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873994 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874028 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874069 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874104 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874122 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874160 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874182 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874249 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.875156 1971324 out.go:177] * Verifying Kubernetes components...
	I0120 14:07:19.877548 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:07:19.894903 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41867
	I0120 14:07:19.895611 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0120 14:07:19.895799 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0120 14:07:19.895810 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0120 14:07:19.896235 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896371 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896374 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896427 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896946 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.896965 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897049 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897061 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897097 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897109 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897171 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897179 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897407 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897504 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897554 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.897763 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897815 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.898170 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.898210 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.898503 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.898556 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.899598 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.899642 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.901013 1971324 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.901024 1971324 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:07:19.901047 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.901256 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.901294 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.921489 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0120 14:07:19.922200 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.922354 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I0120 14:07:19.922487 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0120 14:07:19.923012 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.923115 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.923351 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.923371 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.923750 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.923773 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.923903 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.924012 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.924035 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.924227 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.925245 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.925523 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.926174 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.926409 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.926777 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0120 14:07:19.927338 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.927812 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.928588 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.928606 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.928749 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.928849 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.929144 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.929629 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.929667 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.930118 1971324 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:07:19.931197 1971324 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:07:19.931224 1971324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:07:19.933008 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:07:19.933033 1971324 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:07:19.933058 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.933259 1971324 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:07:19.933369 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:07:19.933389 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.933347 1971324 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:07:19.934800 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:07:19.934818 1971324 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:07:19.934847 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.937550 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.937957 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.937999 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.938124 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.938295 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.938406 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.938486 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.938817 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940255 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940625 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.940648 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940917 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.940993 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.941018 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.941159 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.941305 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.941350 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.941478 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.941512 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.941902 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.942284 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.948962 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34819
	I0120 14:07:19.949405 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.949966 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.949989 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.950388 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.950699 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.952288 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.952507 1971324 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:07:19.952523 1971324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:07:19.952542 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.956242 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.956713 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.956743 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.956859 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.957008 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.957169 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.957470 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:20.127114 1971324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:07:20.154612 1971324 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-727256" to be "Ready" ...
	I0120 14:07:20.192263 1971324 node_ready.go:49] node "default-k8s-diff-port-727256" has status "Ready":"True"
	I0120 14:07:20.192290 1971324 node_ready.go:38] duration metric: took 37.635597ms for node "default-k8s-diff-port-727256" to be "Ready" ...
	I0120 14:07:20.192301 1971324 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:07:20.213859 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:07:20.213892 1971324 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:07:20.231942 1971324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:20.258778 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:07:20.282980 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:07:20.283031 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:07:20.283840 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:07:20.283868 1971324 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:07:20.313871 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:07:20.313902 1971324 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:07:20.343875 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:07:20.343906 1971324 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:07:20.366130 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:07:20.366161 1971324 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:07:20.377530 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:07:20.391855 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:07:20.391890 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:07:20.422771 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:07:20.490042 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:07:20.490070 1971324 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:07:20.668552 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.668581 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.668941 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.669010 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.669026 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.669028 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:20.669036 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.669363 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.669390 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.675996 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.676026 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.676331 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.676388 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.676354 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:20.680026 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:07:20.680052 1971324 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:07:20.807657 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:07:20.807698 1971324 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:07:20.876039 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:07:20.876068 1971324 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:07:20.999452 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:07:20.999483 1971324 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:07:21.023485 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:07:21.643979 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266406433s)
	I0120 14:07:21.644056 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:21.644071 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:21.644447 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:21.644477 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:21.644506 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:21.644521 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:21.644831 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:21.644845 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.256978 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:22.324244 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.901426994s)
	I0120 14:07:22.324341 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:22.324361 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:22.324787 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:22.324849 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:22.324866 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.324875 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:22.324883 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:22.325248 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:22.325278 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:22.325285 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.325302 1971324 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-727256"
	I0120 14:07:23.339621 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.316057578s)
	I0120 14:07:23.339712 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:23.339732 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:23.340118 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:23.340201 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:23.340216 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:23.340225 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:23.340136 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:23.340517 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:23.342106 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:23.342125 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:23.343861 1971324 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-727256 addons enable metrics-server
	
	I0120 14:07:23.345414 1971324 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:07:23.346269 1971324 addons.go:514] duration metric: took 3.472914176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:07:24.739396 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:26.739597 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:27.738986 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.739017 1971324 pod_ready.go:82] duration metric: took 7.507037469s for pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.739032 1971324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.745501 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.745528 1971324 pod_ready.go:82] duration metric: took 6.487852ms for pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.745540 1971324 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.750780 1971324 pod_ready.go:93] pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.750815 1971324 pod_ready.go:82] duration metric: took 5.263354ms for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.750829 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.757357 1971324 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.757387 1971324 pod_ready.go:82] duration metric: took 6.549516ms for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.757400 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.763302 1971324 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.763332 1971324 pod_ready.go:82] duration metric: took 5.92298ms for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.763347 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6vtjs" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.139358 1971324 pod_ready.go:93] pod "kube-proxy-6vtjs" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:28.139385 1971324 pod_ready.go:82] duration metric: took 376.030461ms for pod "kube-proxy-6vtjs" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.139395 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.536558 1971324 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:28.536595 1971324 pod_ready.go:82] duration metric: took 397.192361ms for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.536609 1971324 pod_ready.go:39] duration metric: took 8.344296802s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:07:28.536633 1971324 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:07:28.536700 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:07:28.573027 1971324 api_server.go:72] duration metric: took 8.699758175s to wait for apiserver process to appear ...
	I0120 14:07:28.573068 1971324 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:07:28.573101 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:07:28.578383 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0120 14:07:28.579376 1971324 api_server.go:141] control plane version: v1.32.0
	I0120 14:07:28.579402 1971324 api_server.go:131] duration metric: took 6.325441ms to wait for apiserver health ...
	I0120 14:07:28.579413 1971324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:07:28.743059 1971324 system_pods.go:59] 9 kube-system pods found
	I0120 14:07:28.743094 1971324 system_pods.go:61] "coredns-668d6bf9bc-l4rmh" [06f4698d-c393-4f30-b8de-77ade02b575e] Running
	I0120 14:07:28.743100 1971324 system_pods.go:61] "coredns-668d6bf9bc-v22vm" [95644362-4ab9-405f-b433-5b384ab083d1] Running
	I0120 14:07:28.743104 1971324 system_pods.go:61] "etcd-default-k8s-diff-port-727256" [888345c9-ff71-44eb-9501-6a878f6e7fce] Running
	I0120 14:07:28.743108 1971324 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-727256" [2c11d7e2-9f34-4861-977b-7559572c5eb9] Running
	I0120 14:07:28.743111 1971324 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-727256" [f6202668-dca8-46a8-9ac2-d58b96bda552] Running
	I0120 14:07:28.743115 1971324 system_pods.go:61] "kube-proxy-6vtjs" [d57cfd3b-d6bd-4e61-a606-b2451a3768ca] Running
	I0120 14:07:28.743118 1971324 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-727256" [764e1f75-6402-4ce2-9d44-5d8af5dbb0e8] Running
	I0120 14:07:28.743124 1971324 system_pods.go:61] "metrics-server-f79f97bbb-kp5hl" [190513f9-3e9f-4705-ae23-9481987802f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:07:28.743129 1971324 system_pods.go:61] "storage-provisioner" [0f716b6a-f5d2-49a0-a810-e0cdf72a3020] Running
	I0120 14:07:28.743136 1971324 system_pods.go:74] duration metric: took 163.71699ms to wait for pod list to return data ...
	I0120 14:07:28.743145 1971324 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:07:28.937247 1971324 default_sa.go:45] found service account: "default"
	I0120 14:07:28.937280 1971324 default_sa.go:55] duration metric: took 194.12949ms for default service account to be created ...
	I0120 14:07:28.937291 1971324 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:07:29.391088 1971324 system_pods.go:87] 9 kube-system pods found
	I0120 14:07:57.893064 1971155 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 14:07:57.893206 1971155 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 14:07:57.895047 1971155 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 14:07:57.895110 1971155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:57.895204 1971155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:57.895358 1971155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:57.895455 1971155 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 14:07:57.895510 1971155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:57.897667 1971155 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:57.897769 1971155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:57.897859 1971155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:57.897979 1971155 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:57.898089 1971155 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:57.898184 1971155 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:57.898261 1971155 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:57.898370 1971155 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:57.898473 1971155 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:57.898549 1971155 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:57.898650 1971155 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:57.898706 1971155 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:57.898808 1971155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:57.898866 1971155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:57.898917 1971155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:57.898971 1971155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:57.899018 1971155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:57.899156 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:57.899270 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:57.899322 1971155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:57.899385 1971155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:57.900907 1971155 out.go:235]   - Booting up control plane ...
	I0120 14:07:57.901012 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:57.901098 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:57.901183 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:57.901301 1971155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:57.901498 1971155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 14:07:57.901549 1971155 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 14:07:57.901614 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.901802 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.901862 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902008 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902071 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902248 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902332 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902476 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902532 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902723 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902740 1971155 kubeadm.go:310] 
	I0120 14:07:57.902798 1971155 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 14:07:57.902913 1971155 kubeadm.go:310] 		timed out waiting for the condition
	I0120 14:07:57.902942 1971155 kubeadm.go:310] 
	I0120 14:07:57.902990 1971155 kubeadm.go:310] 	This error is likely caused by:
	I0120 14:07:57.903050 1971155 kubeadm.go:310] 		- The kubelet is not running
	I0120 14:07:57.903175 1971155 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 14:07:57.903185 1971155 kubeadm.go:310] 
	I0120 14:07:57.903309 1971155 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 14:07:57.903358 1971155 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 14:07:57.903406 1971155 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 14:07:57.903415 1971155 kubeadm.go:310] 
	I0120 14:07:57.903535 1971155 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 14:07:57.903608 1971155 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 14:07:57.903614 1971155 kubeadm.go:310] 
	I0120 14:07:57.903742 1971155 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 14:07:57.903828 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 14:07:57.903894 1971155 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 14:07:57.903959 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 14:07:57.903970 1971155 kubeadm.go:310] 
	W0120 14:07:57.904147 1971155 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 14:07:57.904205 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:07:58.379343 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:07:58.394094 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:07:58.405184 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:07:58.405214 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:07:58.405275 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:07:58.415126 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:07:58.415190 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:07:58.425525 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:07:58.435286 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:07:58.435402 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:07:58.445346 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:07:58.455338 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:07:58.455400 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:07:58.465346 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:07:58.474739 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:07:58.474821 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:07:58.484664 1971155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:07:58.559434 1971155 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 14:07:58.559546 1971155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:58.713832 1971155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:58.713978 1971155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:58.714110 1971155 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 14:07:58.902142 1971155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:58.904151 1971155 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:58.904252 1971155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:58.904340 1971155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:58.904451 1971155 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:58.904532 1971155 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:58.904662 1971155 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:58.904752 1971155 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:58.904850 1971155 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:58.904938 1971155 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:58.905078 1971155 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:58.905203 1971155 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:58.905255 1971155 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:58.905311 1971155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:59.059284 1971155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:59.367307 1971155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:59.478773 1971155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:59.769599 1971155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:59.795017 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:59.796387 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:59.796440 1971155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:59.967182 1971155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:59.969049 1971155 out.go:235]   - Booting up control plane ...
	I0120 14:07:59.969210 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:59.969498 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:59.978995 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:59.980298 1971155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:59.983629 1971155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 14:08:39.986873 1971155 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 14:08:39.986972 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:39.987222 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:08:44.987592 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:44.987868 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:08:54.988530 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:54.988725 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:14.990244 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:09:14.990492 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:54.990993 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:09:54.991340 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:54.991370 1971155 kubeadm.go:310] 
	I0120 14:09:54.991419 1971155 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 14:09:54.991474 1971155 kubeadm.go:310] 		timed out waiting for the condition
	I0120 14:09:54.991485 1971155 kubeadm.go:310] 
	I0120 14:09:54.991536 1971155 kubeadm.go:310] 	This error is likely caused by:
	I0120 14:09:54.991582 1971155 kubeadm.go:310] 		- The kubelet is not running
	I0120 14:09:54.991734 1971155 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 14:09:54.991760 1971155 kubeadm.go:310] 
	I0120 14:09:54.991930 1971155 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 14:09:54.991981 1971155 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 14:09:54.992034 1971155 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 14:09:54.992065 1971155 kubeadm.go:310] 
	I0120 14:09:54.992234 1971155 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 14:09:54.992326 1971155 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 14:09:54.992342 1971155 kubeadm.go:310] 
	I0120 14:09:54.992508 1971155 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 14:09:54.992650 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 14:09:54.992786 1971155 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 14:09:54.992894 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 14:09:54.992904 1971155 kubeadm.go:310] 
	I0120 14:09:54.994025 1971155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:09:54.994123 1971155 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 14:09:54.994214 1971155 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 14:09:54.994325 1971155 kubeadm.go:394] duration metric: took 7m58.806679255s to StartCluster
	I0120 14:09:54.994398 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:09:54.994475 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:09:55.044299 1971155 cri.go:89] found id: ""
	I0120 14:09:55.044338 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.044350 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:09:55.044359 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:09:55.044434 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:09:55.088726 1971155 cri.go:89] found id: ""
	I0120 14:09:55.088759 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.088767 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:09:55.088774 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:09:55.088848 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:09:55.127484 1971155 cri.go:89] found id: ""
	I0120 14:09:55.127513 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.127523 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:09:55.127531 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:09:55.127602 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:09:55.167042 1971155 cri.go:89] found id: ""
	I0120 14:09:55.167079 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.167091 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:09:55.167100 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:09:55.167173 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:09:55.206075 1971155 cri.go:89] found id: ""
	I0120 14:09:55.206111 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.206122 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:09:55.206128 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:09:55.206184 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:09:55.262849 1971155 cri.go:89] found id: ""
	I0120 14:09:55.262895 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.262907 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:09:55.262917 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:09:55.262989 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:09:55.303064 1971155 cri.go:89] found id: ""
	I0120 14:09:55.303102 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.303114 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:09:55.303122 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:09:55.303190 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:09:55.339202 1971155 cri.go:89] found id: ""
	I0120 14:09:55.339237 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.339248 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:09:55.339262 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:09:55.339279 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:09:55.425991 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:09:55.426022 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:09:55.426042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:09:55.529413 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:09:55.529454 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:09:55.574927 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:09:55.574965 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:09:55.631464 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:09:55.631507 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0120 14:09:55.647055 1971155 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 14:09:55.647121 1971155 out.go:270] * 
	W0120 14:09:55.647197 1971155 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 14:09:55.647230 1971155 out.go:270] * 
	W0120 14:09:55.648431 1971155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 14:09:55.652120 1971155 out.go:201] 
	W0120 14:09:55.653811 1971155 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 14:09:55.653880 1971155 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 14:09:55.653909 1971155 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 14:09:55.655598 1971155 out.go:201] 
	
	
	==> CRI-O <==
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.195666821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737382197195595078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a207331-4733-4710-af01-f7dc13c45063 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.196232218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67ded52a-7358-4993-bbaf-69a96cf19342 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.196285759Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67ded52a-7358-4993-bbaf-69a96cf19342 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.196316323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67ded52a-7358-4993-bbaf-69a96cf19342 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.236072571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98917f5d-7270-4f62-96aa-e17d9fdab7a0 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.236148694Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98917f5d-7270-4f62-96aa-e17d9fdab7a0 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.237593226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29a48d28-e377-47b0-b3d5-e5ee0e0d2f12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.238112884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737382197238088881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29a48d28-e377-47b0-b3d5-e5ee0e0d2f12 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.238856398Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca5893d6-252d-407a-939b-1b956a7737d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.238922609Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca5893d6-252d-407a-939b-1b956a7737d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.238972711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ca5893d6-252d-407a-939b-1b956a7737d4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.273907688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=980b00a7-79ba-40a1-aca3-a0bda6afd18f name=/runtime.v1.RuntimeService/Version
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.273980718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=980b00a7-79ba-40a1-aca3-a0bda6afd18f name=/runtime.v1.RuntimeService/Version
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.275108270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2443e8c2-4591-4339-8464-178f60bcfc17 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.275518890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737382197275494175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2443e8c2-4591-4339-8464-178f60bcfc17 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.276345390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f3cd6f2-be50-4303-87a9-7f333d71a734 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.276450766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f3cd6f2-be50-4303-87a9-7f333d71a734 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.276516879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1f3cd6f2-be50-4303-87a9-7f333d71a734 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.309889722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=834aa1cd-cc93-4ad6-89b9-2a52e4947720 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.309962124Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=834aa1cd-cc93-4ad6-89b9-2a52e4947720 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.311358942Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9547af5-25ce-466f-8742-9b7aadc5453a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.311863079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737382197311820232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9547af5-25ce-466f-8742-9b7aadc5453a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.312518221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18387ffa-9974-4092-9705-28f48dfd533a name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.312566493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18387ffa-9974-4092-9705-28f48dfd533a name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:09:57 old-k8s-version-191446 crio[632]: time="2025-01-20 14:09:57.312597869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=18387ffa-9974-4092-9705-28f48dfd533a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan20 14:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057095] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.065658] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.960481] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.662559] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.919769] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.062908] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080713] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.237108] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.143710] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.284512] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.705620] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.060994] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.033318] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[Jan20 14:02] kauditd_printk_skb: 46 callbacks suppressed
	[Jan20 14:06] systemd-fstab-generator[5027]: Ignoring "noauto" option for root device
	[Jan20 14:07] systemd-fstab-generator[5311]: Ignoring "noauto" option for root device
	[  +0.070549] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:09:57 up 8 min,  0 users,  load average: 0.02, 0.18, 0.10
	Linux old-k8s-version-191446 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0000a60c0, 0xc000a3eb40)
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]: goroutine 163 [select]:
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c05ef0, 0x4f0ac20, 0xc000cf4f50, 0x1, 0xc0000a60c0)
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000b9c000, 0xc0000a60c0)
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bea230, 0xc00031b7c0)
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 20 14:09:55 old-k8s-version-191446 kubelet[5489]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 20 14:09:55 old-k8s-version-191446 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 20 14:09:55 old-k8s-version-191446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 20 14:09:55 old-k8s-version-191446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jan 20 14:09:55 old-k8s-version-191446 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 20 14:09:55 old-k8s-version-191446 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 20 14:09:56 old-k8s-version-191446 kubelet[5554]: I0120 14:09:56.000075    5554 server.go:416] Version: v1.20.0
	Jan 20 14:09:56 old-k8s-version-191446 kubelet[5554]: I0120 14:09:56.000421    5554 server.go:837] Client rotation is on, will bootstrap in background
	Jan 20 14:09:56 old-k8s-version-191446 kubelet[5554]: I0120 14:09:56.003682    5554 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 20 14:09:56 old-k8s-version-191446 kubelet[5554]: W0120 14:09:56.004969    5554 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 20 14:09:56 old-k8s-version-191446 kubelet[5554]: I0120 14:09:56.005058    5554 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 2 (259.521544ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-191446" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (511.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1618.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-727256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E0120 14:02:36.597599 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:03:59.682241 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:06:26.556965 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:07:36.597861 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-727256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: signal: killed (26m55.154501398s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-727256] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "default-k8s-diff-port-727256" primary control-plane node in "default-k8s-diff-port-727256" cluster
	* Restarting existing kvm2 VM for "default-k8s-diff-port-727256" ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-727256 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 14:01:30.648649 1971324 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:01:30.648768 1971324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:30.648777 1971324 out.go:358] Setting ErrFile to fd 2...
	I0120 14:01:30.648781 1971324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:30.648971 1971324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 14:01:30.649563 1971324 out.go:352] Setting JSON to false
	I0120 14:01:30.650677 1971324 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20637,"bootTime":1737361054,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:01:30.650808 1971324 start.go:139] virtualization: kvm guest
	I0120 14:01:30.653087 1971324 out.go:177] * [default-k8s-diff-port-727256] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:01:30.654902 1971324 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:01:30.654958 1971324 notify.go:220] Checking for updates...
	I0120 14:01:30.657200 1971324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:01:30.658358 1971324 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:01:30.659540 1971324 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:01:30.660755 1971324 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:01:30.662124 1971324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:01:30.664066 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:01:30.664694 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:30.664783 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:30.683363 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0120 14:01:30.684660 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:30.685421 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:30.685453 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:30.685849 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:30.686136 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:30.686482 1971324 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:01:30.686962 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:30.687017 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:30.705214 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0120 14:01:30.705778 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:30.706464 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:30.706496 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:30.706910 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:30.707413 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:30.748140 1971324 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 14:01:30.749542 1971324 start.go:297] selected driver: kvm2
	I0120 14:01:30.749575 1971324 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:30.749732 1971324 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:01:30.750471 1971324 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:30.750569 1971324 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:01:30.769419 1971324 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:01:30.769920 1971324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:01:30.769962 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:01:30.770026 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:01:30.770087 1971324 start.go:340] cluster config:
	{Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:30.770203 1971324 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:30.772094 1971324 out.go:177] * Starting "default-k8s-diff-port-727256" primary control-plane node in "default-k8s-diff-port-727256" cluster
	I0120 14:01:30.773441 1971324 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:01:30.773503 1971324 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 14:01:30.773514 1971324 cache.go:56] Caching tarball of preloaded images
	I0120 14:01:30.773638 1971324 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 14:01:30.773650 1971324 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 14:01:30.773755 1971324 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/config.json ...
	I0120 14:01:30.774002 1971324 start.go:360] acquireMachinesLock for default-k8s-diff-port-727256: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:01:47.480212 1971324 start.go:364] duration metric: took 16.706172443s to acquireMachinesLock for "default-k8s-diff-port-727256"
	I0120 14:01:47.480300 1971324 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:01:47.480313 1971324 fix.go:54] fixHost starting: 
	I0120 14:01:47.480706 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:47.480762 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:47.499438 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0120 14:01:47.499966 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:47.500523 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:47.500551 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:47.501028 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:47.501254 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:47.501413 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:01:47.503562 1971324 fix.go:112] recreateIfNeeded on default-k8s-diff-port-727256: state=Stopped err=<nil>
	I0120 14:01:47.503596 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	W0120 14:01:47.503774 1971324 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:01:47.505539 1971324 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-727256" ...
	I0120 14:01:47.506801 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Start
	I0120 14:01:47.507007 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) starting domain...
	I0120 14:01:47.507037 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) ensuring networks are active...
	I0120 14:01:47.507737 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Ensuring network default is active
	I0120 14:01:47.508168 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Ensuring network mk-default-k8s-diff-port-727256 is active
	I0120 14:01:47.508707 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) getting domain XML...
	I0120 14:01:47.509515 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) creating domain...
	I0120 14:01:48.889668 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) waiting for IP...
	I0120 14:01:48.890857 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:48.891526 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:48.891694 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:48.891527 1971420 retry.go:31] will retry after 199.178216ms: waiting for domain to come up
	I0120 14:01:49.092132 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.092672 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.092706 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.092636 1971420 retry.go:31] will retry after 255.633273ms: waiting for domain to come up
	I0120 14:01:49.350430 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.351160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.351194 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.351128 1971420 retry.go:31] will retry after 428.048868ms: waiting for domain to come up
	I0120 14:01:49.781110 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.781882 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.781964 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.781864 1971420 retry.go:31] will retry after 580.304151ms: waiting for domain to come up
	I0120 14:01:50.363965 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.364533 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.364559 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:50.364529 1971420 retry.go:31] will retry after 531.332191ms: waiting for domain to come up
	I0120 14:01:50.897267 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.897845 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.897880 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:50.897808 1971420 retry.go:31] will retry after 772.118387ms: waiting for domain to come up
	I0120 14:01:51.671806 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:51.672432 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:51.672466 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:51.672381 1971420 retry.go:31] will retry after 1.060623833s: waiting for domain to come up
	I0120 14:01:52.735398 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:52.735986 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:52.736018 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:52.735943 1971420 retry.go:31] will retry after 1.002731806s: waiting for domain to come up
	I0120 14:01:53.740048 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:53.740702 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:53.740731 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:53.740659 1971420 retry.go:31] will retry after 1.680491712s: waiting for domain to come up
	I0120 14:01:55.423577 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:55.424100 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:55.424135 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:55.424031 1971420 retry.go:31] will retry after 1.794880075s: waiting for domain to come up
	I0120 14:01:57.220139 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:57.220694 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:57.220723 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:57.220656 1971420 retry.go:31] will retry after 2.261913004s: waiting for domain to come up
	I0120 14:01:59.484214 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:59.484791 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:59.484820 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:59.484718 1971420 retry.go:31] will retry after 2.630282337s: waiting for domain to come up
	I0120 14:02:02.116624 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:02.117129 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:02:02.117163 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:02:02.117089 1971420 retry.go:31] will retry after 3.120909651s: waiting for domain to come up
	I0120 14:02:05.239389 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:05.239901 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:02:05.239953 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:02:05.239877 1971420 retry.go:31] will retry after 4.391800801s: waiting for domain to come up
	I0120 14:02:09.634193 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.634637 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has current primary IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.634659 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) found domain IP: 192.168.72.104
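The run of "will retry after …: waiting for domain to come up" lines above is minikube polling libvirt for the domain's DHCP lease, sleeping a little longer (with jitter) after each miss until an address appears, about 21 seconds in this run. A minimal, self-contained sketch of that pattern; the `lookupDomainIP` helper is a hypothetical stand-in for the real libvirt lease query, not the actual retry.go implementation:

```go
// Sketch of the wait-for-IP retry loop seen in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupDomainIP pretends the DHCP lease only appears after several polls.
func lookupDomainIP(attempt int) (string, error) {
	if attempt < 12 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.72.104", nil
}

// waitForIP polls until an address appears, growing the delay (plus some
// jitter) after every failed attempt, as the retry messages above show.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupDomainIP(attempt); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 13 / 10 // grow ~30% per attempt
	}
	return "", fmt.Errorf("timed out after %v waiting for domain IP", timeout)
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	fmt.Println(ip, err)
}
```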
	I0120 14:02:09.634684 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) reserving static IP address...
	I0120 14:02:09.635059 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) reserved static IP address 192.168.72.104 for domain default-k8s-diff-port-727256
	I0120 14:02:09.635098 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-727256", mac: "52:54:00:59:90:f7", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.635109 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) waiting for SSH...
	I0120 14:02:09.635133 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | skip adding static IP to network mk-default-k8s-diff-port-727256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-727256", mac: "52:54:00:59:90:f7", ip: "192.168.72.104"}
	I0120 14:02:09.635148 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Getting to WaitForSSH function...
	I0120 14:02:09.637199 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.637520 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.637554 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.637664 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Using SSH client type: external
	I0120 14:02:09.637695 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa (-rw-------)
	I0120 14:02:09.637761 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:02:09.637785 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | About to run SSH command:
	I0120 14:02:09.637834 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | exit 0
	I0120 14:02:09.763002 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | SSH cmd err, output: <nil>: 
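With the address known, the driver probes SSH readiness by running a trivial `exit 0` through an external ssh client using the options logged above, retrying until the command succeeds (the empty "SSH cmd err, output" line marks success). A rough sketch of that probe under those assumptions; the key path and retry cadence here are placeholders, not docker-machine's actual WaitForSSH code:

```go
// Sketch of the "exit 0" SSH readiness probe.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" on the guest with a throwaway known-hosts file,
// the same style of options as the logged command line.
func sshReady(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("192.168.72.104", "/path/to/id_rsa") {
			fmt.Println("SSH is up")
			return
		}
		fmt.Println("waiting for SSH...")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```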
	I0120 14:02:09.763410 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetConfigRaw
	I0120 14:02:09.764140 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:09.766862 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.767255 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.767309 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.767547 1971324 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/config.json ...
	I0120 14:02:09.767747 1971324 machine.go:93] provisionDockerMachine start ...
	I0120 14:02:09.767768 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:09.768084 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:09.770642 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.770978 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.771008 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.771159 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:09.771355 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.771522 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.771651 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:09.771843 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:09.772116 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:09.772135 1971324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:02:09.887277 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:02:09.887306 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:09.887607 1971324 buildroot.go:166] provisioning hostname "default-k8s-diff-port-727256"
	I0120 14:02:09.887644 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:09.887855 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:09.890533 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.890940 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.890972 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.891158 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:09.891363 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.891514 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.891625 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:09.891766 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:09.891982 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:09.891996 1971324 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-727256 && echo "default-k8s-diff-port-727256" | sudo tee /etc/hostname
	I0120 14:02:10.015326 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-727256
	
	I0120 14:02:10.015358 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.018488 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.018889 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.018920 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.019174 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.019397 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.019591 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.019775 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.019935 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:10.020121 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:10.020141 1971324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-727256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-727256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-727256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:02:10.136552 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:02:10.136593 1971324 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:02:10.136631 1971324 buildroot.go:174] setting up certificates
	I0120 14:02:10.136653 1971324 provision.go:84] configureAuth start
	I0120 14:02:10.136667 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:10.137020 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:10.140046 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.140577 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.140627 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.140766 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.143806 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.144185 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.144220 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.144340 1971324 provision.go:143] copyHostCerts
	I0120 14:02:10.144408 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:02:10.144433 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:02:10.144518 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:02:10.144663 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:02:10.144675 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:02:10.144716 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:02:10.144827 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:02:10.144838 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:02:10.144865 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:02:10.144958 1971324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-727256 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-727256 localhost minikube]
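The SAN list logged above (127.0.0.1, 192.168.72.104, the machine name, localhost, minikube) ends up in the server certificate's IP and DNS subject-alternative-name fields. A minimal sketch of issuing such a certificate with Go's crypto/x509; it is self-signed here for brevity, whereas the real flow signs with the CA key pair under .minikube/certs:

```go
// Sketch: issue a server cert carrying the SANs from the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-727256"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go line above.
		DNSNames:    []string{"default-k8s-diff-port-727256", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.104")},
	}
	// Self-signed for brevity; the real code signs with the shared CA.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```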
	I0120 14:02:10.704568 1971324 provision.go:177] copyRemoteCerts
	I0120 14:02:10.704642 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:02:10.704670 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.707581 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.707968 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.708005 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.708165 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.708406 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.708556 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.708705 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:10.798392 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:02:10.825489 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0120 14:02:10.851203 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 14:02:10.877144 1971324 provision.go:87] duration metric: took 740.469356ms to configureAuth
	I0120 14:02:10.877184 1971324 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:02:10.877372 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:02:10.877454 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.880681 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.881100 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.881135 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.881295 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.881487 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.881661 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.881824 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.881986 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:10.882152 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:10.882168 1971324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:02:11.118214 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:02:11.118246 1971324 machine.go:96] duration metric: took 1.350483814s to provisionDockerMachine
	I0120 14:02:11.118262 1971324 start.go:293] postStartSetup for "default-k8s-diff-port-727256" (driver="kvm2")
	I0120 14:02:11.118274 1971324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:02:11.118291 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.118662 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:02:11.118706 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.121765 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.122132 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.122160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.122325 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.122539 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.122849 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.123019 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.205783 1971324 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:02:11.211240 1971324 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:02:11.211282 1971324 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:02:11.211389 1971324 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:02:11.211524 1971324 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:02:11.211679 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:02:11.222226 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:02:11.248964 1971324 start.go:296] duration metric: took 130.683064ms for postStartSetup
	I0120 14:02:11.249013 1971324 fix.go:56] duration metric: took 23.768701383s for fixHost
	I0120 14:02:11.249043 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.252350 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.252735 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.252784 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.253016 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.253244 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.253451 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.253587 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.253769 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:11.254003 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:11.254018 1971324 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:02:11.360027 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381731.321642168
	
	I0120 14:02:11.360058 1971324 fix.go:216] guest clock: 1737381731.321642168
	I0120 14:02:11.360067 1971324 fix.go:229] Guest: 2025-01-20 14:02:11.321642168 +0000 UTC Remote: 2025-01-20 14:02:11.249019145 +0000 UTC m=+40.644950772 (delta=72.623023ms)
	I0120 14:02:11.360095 1971324 fix.go:200] guest clock delta is within tolerance: 72.623023ms
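The guest/host clock comparison above works by running `date +%s.%N` on the guest over SSH and diffing the result against the host clock; the ~72 ms delta is accepted because it falls inside the tolerance. A small sketch of that check using the values from the log; the one-second tolerance is an assumption for illustration, not minikube's exact threshold:

```go
// Sketch of the guest-clock drift check.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// ahead (or behind) the guest clock is relative to the given host time.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	// Values taken from the log lines above.
	hostNow := time.Date(2025, 1, 20, 14, 2, 11, 249019145, time.UTC)
	delta, err := clockDelta("1737381731.321642168", hostNow)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold
	fmt.Printf("guest clock delta %v, within tolerance: %v\n",
		delta, delta > -tolerance && delta < tolerance)
}
```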
	I0120 14:02:11.360110 1971324 start.go:83] releasing machines lock for "default-k8s-diff-port-727256", held for 23.8798308s
	I0120 14:02:11.360147 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.360474 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:11.363630 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.364131 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.364160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.364441 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365063 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365248 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365348 1971324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:02:11.365404 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.365419 1971324 ssh_runner.go:195] Run: cat /version.json
	I0120 14:02:11.365439 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.368411 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.368839 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.368879 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.368903 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.369109 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.369341 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.369383 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.369421 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.369557 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.369661 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.369746 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.369900 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.370094 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.370254 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.448584 1971324 ssh_runner.go:195] Run: systemctl --version
	I0120 14:02:11.476726 1971324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:02:11.630047 1971324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:02:11.636964 1971324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:02:11.637055 1971324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:02:11.654243 1971324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:02:11.654288 1971324 start.go:495] detecting cgroup driver to use...
	I0120 14:02:11.654363 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:02:11.671320 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:02:11.687866 1971324 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:02:11.687931 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:02:11.703932 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:02:11.718827 1971324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:02:11.847210 1971324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:02:12.007623 1971324 docker.go:233] disabling docker service ...
	I0120 14:02:12.007698 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:02:12.024946 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:02:12.039357 1971324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:02:12.198785 1971324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:02:12.318653 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:02:12.335226 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:02:12.356118 1971324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 14:02:12.356185 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.368853 1971324 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:02:12.368928 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.382590 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.395155 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.407707 1971324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:02:12.420260 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.432650 1971324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.451911 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
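Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings: the pause image is pinned, cgroupfs is used as the cgroup manager, conmon runs in the pod cgroup, and unprivileged low ports are allowed. This fragment is reconstructed from the commands in the log, not read from the actual file:

```toml
pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
```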
	I0120 14:02:12.463708 1971324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:02:12.474047 1971324 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:02:12.474171 1971324 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:02:12.487873 1971324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
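The status-255 sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so minikube treats the failed read as "module missing", loads it with modprobe, and then enables IPv4 forwarding before restarting CRI-O. A sketch of that fallback (not minikube's exact code):

```go
// Sketch: probe bridge-netfilter, load the module on failure, enable forwarding.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and reports whether
// it succeeded.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s %v: %s\n", name, args, out)
	return err
}

func main() {
	// A failing read means br_netfilter is not loaded yet, not a fatal error.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	// kube-proxy and the bridge CNI need IPv4 forwarding either way.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
```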
	I0120 14:02:12.498585 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:02:12.613685 1971324 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:02:12.729768 1971324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:02:12.729875 1971324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:02:12.734978 1971324 start.go:563] Will wait 60s for crictl version
	I0120 14:02:12.735064 1971324 ssh_runner.go:195] Run: which crictl
	I0120 14:02:12.739280 1971324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:02:12.786678 1971324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:02:12.786793 1971324 ssh_runner.go:195] Run: crio --version
	I0120 14:02:12.817307 1971324 ssh_runner.go:195] Run: crio --version
	I0120 14:02:12.852593 1971324 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 14:02:12.853765 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:12.856623 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:12.857007 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:12.857053 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:12.857241 1971324 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 14:02:12.861728 1971324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:02:12.877000 1971324 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727
256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:02:12.877127 1971324 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:02:12.877169 1971324 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:02:12.929986 1971324 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 14:02:12.930071 1971324 ssh_runner.go:195] Run: which lz4
	I0120 14:02:12.934799 1971324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:02:12.939259 1971324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:02:12.939291 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 14:02:15.168447 1971324 crio.go:462] duration metric: took 2.233676027s to copy over tarball
	I0120 14:02:15.168587 1971324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 14:02:17.552550 1971324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.383920665s)
	I0120 14:02:17.552588 1971324 crio.go:469] duration metric: took 2.38410161s to extract the tarball
	I0120 14:02:17.552598 1971324 ssh_runner.go:146] rm: /preloaded.tar.lz4
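Because /preloaded.tar.lz4 was absent on the guest, the ~380 MB preload tarball is copied over and unpacked into /var with extended attributes preserved, then deleted. A sketch of the extract-and-clean-up step, assuming the tarball has already been copied onto the guest; it mirrors the tar invocation in the log rather than minikube's ssh_runner code:

```go
// Sketch: unpack the preloaded image tarball into /var, then remove it.
package main

import (
	"log"
	"os/exec"
)

func main() {
	steps := [][]string{
		// Preserve xattrs such as security.capability so extracted binaries
		// keep their file capabilities.
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
		{"sudo", "rm", "-f", "/preloaded.tar.lz4"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", s, err, out)
		}
	}
}
```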
	I0120 14:02:17.595819 1971324 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:02:17.649094 1971324 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 14:02:17.649124 1971324 cache_images.go:84] Images are preloaded, skipping loading
	I0120 14:02:17.649135 1971324 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.32.0 crio true true} ...
	I0120 14:02:17.649302 1971324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-727256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:02:17.649381 1971324 ssh_runner.go:195] Run: crio config
	I0120 14:02:17.704561 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:02:17.704586 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:02:17.704598 1971324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:02:17.704619 1971324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-727256 NodeName:default-k8s-diff-port-727256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 14:02:17.704750 1971324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-727256"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.104"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:02:17.704816 1971324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 14:02:17.716061 1971324 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:02:17.716155 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:02:17.727801 1971324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0120 14:02:17.748166 1971324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:02:17.766985 1971324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0120 14:02:17.787650 1971324 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0120 14:02:17.791993 1971324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:02:17.808216 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:02:17.961542 1971324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:02:17.984203 1971324 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256 for IP: 192.168.72.104
	I0120 14:02:17.984233 1971324 certs.go:194] generating shared ca certs ...
	I0120 14:02:17.984291 1971324 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:02:17.984557 1971324 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:02:17.984648 1971324 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:02:17.984666 1971324 certs.go:256] generating profile certs ...
	I0120 14:02:17.984792 1971324 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.key
	I0120 14:02:17.984852 1971324 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.key.23647750
	I0120 14:02:17.984912 1971324 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.key
	I0120 14:02:17.985077 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:02:17.985121 1971324 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:02:17.985133 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:02:17.985155 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:02:17.985178 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:02:17.985198 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:02:17.985256 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:02:17.985878 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:02:18.048719 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:02:18.112171 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:02:18.145094 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:02:18.177563 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0120 14:02:18.207741 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 14:02:18.238193 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:02:18.267493 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:02:18.299204 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:02:18.326722 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:02:18.354365 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:02:18.387004 1971324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:02:18.407331 1971324 ssh_runner.go:195] Run: openssl version
	I0120 14:02:18.414499 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:02:18.428237 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.433437 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.433525 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.440279 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:02:18.453372 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:02:18.466685 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.472158 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.472221 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.479048 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 14:02:18.492239 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:02:18.505538 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.511360 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.511449 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.518290 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:02:18.531250 1971324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:02:18.536241 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:02:18.543115 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:02:18.549735 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:02:18.556016 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:02:18.563051 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:02:18.569460 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 14:02:18.576252 1971324 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:02:18.576356 1971324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:02:18.576422 1971324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:02:18.620494 1971324 cri.go:89] found id: ""
	I0120 14:02:18.620569 1971324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:02:18.631697 1971324 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:02:18.631720 1971324 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:02:18.631768 1971324 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:02:18.642156 1971324 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:02:18.643051 1971324 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-727256" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:02:18.643528 1971324 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-727256" cluster setting kubeconfig missing "default-k8s-diff-port-727256" context setting]
	I0120 14:02:18.644170 1971324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:02:18.668914 1971324 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:02:18.683072 1971324 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0120 14:02:18.683114 1971324 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:02:18.683129 1971324 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 14:02:18.683183 1971324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:02:18.729285 1971324 cri.go:89] found id: ""
	I0120 14:02:18.729378 1971324 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:02:18.747615 1971324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:02:18.760814 1971324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:02:18.760838 1971324 kubeadm.go:157] found existing configuration files:
	
	I0120 14:02:18.760894 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 14:02:18.770641 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:02:18.770724 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:02:18.781179 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 14:02:18.792949 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:02:18.793028 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:02:18.804366 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 14:02:18.815263 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:02:18.815346 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:02:18.825942 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 14:02:18.835903 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:02:18.835982 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:02:18.845972 1971324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:02:18.859961 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.003738 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.608160 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.849647 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.912750 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:20.009660 1971324 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:02:20.009754 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.510534 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.010159 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.032056 1971324 api_server.go:72] duration metric: took 1.022395241s to wait for apiserver process to appear ...
	I0120 14:02:21.032096 1971324 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:02:21.032131 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:21.032697 1971324 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0120 14:02:21.532363 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:23.847330 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:02:23.847369 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:02:23.847385 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:23.877401 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:02:23.877441 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:02:24.032826 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:24.039566 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:02:24.039598 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:02:24.532837 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:24.539028 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:02:24.539067 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:02:25.032465 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:25.039986 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0120 14:02:25.049377 1971324 api_server.go:141] control plane version: v1.32.0
	I0120 14:02:25.049420 1971324 api_server.go:131] duration metric: took 4.017316014s to wait for apiserver health ...
	I0120 14:02:25.049433 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:02:25.049442 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:02:25.051482 1971324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:02:25.052855 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:02:25.066022 1971324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:02:25.095180 1971324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:02:25.114905 1971324 system_pods.go:59] 8 kube-system pods found
	I0120 14:02:25.114960 1971324 system_pods.go:61] "coredns-668d6bf9bc-bz5qj" [d7374913-ed7c-42dc-a94f-44e1e2c757a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:02:25.114976 1971324 system_pods.go:61] "etcd-default-k8s-diff-port-727256" [1b7d5ec9-7630-4785-9c45-41ecdb748a8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 14:02:25.114986 1971324 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-727256" [41957bec-6146-4451-a58e-80cfc0954d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 14:02:25.115001 1971324 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-727256" [700634af-068c-43a9-93fd-cb10680f5547] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 14:02:25.115015 1971324 system_pods.go:61] "kube-proxy-q48xh" [714b43b5-29d9-4ffb-a571-d319ac71ea64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 14:02:25.115023 1971324 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-727256" [37e3619f-2d6c-4ffd-a8a2-e9e935b79342] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 14:02:25.115037 1971324 system_pods.go:61] "metrics-server-f79f97bbb-wgptn" [c1255c51-78a3-4f21-a054-b7eec52e8021] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:02:25.115045 1971324 system_pods.go:61] "storage-provisioner" [f116e0d4-4c99-46b2-bb50-448d19e948da] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 14:02:25.115063 1971324 system_pods.go:74] duration metric: took 19.845736ms to wait for pod list to return data ...
	I0120 14:02:25.115078 1971324 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:02:25.140084 1971324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:02:25.140127 1971324 node_conditions.go:123] node cpu capacity is 2
	I0120 14:02:25.140143 1971324 node_conditions.go:105] duration metric: took 25.059269ms to run NodePressure ...
	I0120 14:02:25.140170 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:25.471605 1971324 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 14:02:25.475871 1971324 kubeadm.go:739] kubelet initialised
	I0120 14:02:25.475897 1971324 kubeadm.go:740] duration metric: took 4.262299ms waiting for restarted kubelet to initialise ...
	I0120 14:02:25.475907 1971324 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:02:25.481730 1971324 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:27.488205 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:29.990080 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:31.992749 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:34.489038 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:35.989736 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:35.989764 1971324 pod_ready.go:82] duration metric: took 10.507995257s for pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.989775 1971324 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.994950 1971324 pod_ready.go:93] pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:35.994974 1971324 pod_ready.go:82] duration metric: took 5.193222ms for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.994984 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:38.002261 1971324 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:39.002130 1971324 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.002163 1971324 pod_ready.go:82] duration metric: took 3.007172332s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.002175 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.007066 1971324 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.007092 1971324 pod_ready.go:82] duration metric: took 4.909894ms for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.007102 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q48xh" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.011300 1971324 pod_ready.go:93] pod "kube-proxy-q48xh" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.011327 1971324 pod_ready.go:82] duration metric: took 4.217903ms for pod "kube-proxy-q48xh" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.011339 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.019267 1971324 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.019290 1971324 pod_ready.go:82] duration metric: took 7.94282ms for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.019299 1971324 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:41.026382 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:43.026822 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:45.526641 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:48.025036 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:50.027377 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:52.526770 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.026492 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:57.026925 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:59.525553 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:01.527499 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:04.025999 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:06.026498 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:08.526091 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:10.526915 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.026831 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:15.028964 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:17.525194 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.526034 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:21.527945 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:24.028130 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:26.527177 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:29.026067 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:31.026580 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:33.028333 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:35.527445 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:38.026699 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:40.526689 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:43.026630 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:45.526315 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.526520 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:50.026151 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:52.026377 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:54.526778 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.027014 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:59.525725 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:01.526915 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:03.529074 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.026194 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:08.525961 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:10.527780 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:13.027079 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:15.027945 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:17.526959 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:20.027030 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.526513 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:24.527843 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:27.027167 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:29.525787 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:31.526304 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:33.527156 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:36.025541 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:38.027612 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:40.528385 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:43.026341 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:45.027225 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:47.529098 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:50.025867 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.026070 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:54.026722 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:56.527827 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.026535 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:01.028886 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:03.526512 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:05.527941 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:08.026139 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:10.026227 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:12.027431 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:14.526570 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:16.527187 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:19.028271 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:21.529531 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:24.025073 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:26.026356 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:28.027425 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:30.525634 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:32.526764 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:35.026819 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:37.532230 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:40.026877 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:42.527349 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:45.026552 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:47.027640 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:49.526205 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:51.527819 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:54.026000 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:56.027202 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:58.526277 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:00.527173 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:03.025832 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:05.026627 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:07.027188 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:09.028290 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:11.031964 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:13.525789 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:15.526985 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:18.026476 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:20.027814 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:22.526030 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:24.526922 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:26.527440 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:28.528148 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:31.026333 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:33.527109 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:36.027336 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:38.526086 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:39.020400 1971324 pod_ready.go:82] duration metric: took 4m0.001084886s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" ...
	E0120 14:06:39.020434 1971324 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:06:39.020464 1971324 pod_ready.go:39] duration metric: took 4m13.544546991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:06:39.020512 1971324 kubeadm.go:597] duration metric: took 4m20.388785998s to restartPrimaryControlPlane
	W0120 14:06:39.020594 1971324 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:06:39.020633 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:07:06.810143 1971324 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.789476664s)
	I0120 14:07:06.810247 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:07:06.832457 1971324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:07:06.852749 1971324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:07:06.873857 1971324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:07:06.873882 1971324 kubeadm.go:157] found existing configuration files:
	
	I0120 14:07:06.873943 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 14:07:06.886791 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:07:06.886875 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:07:06.909304 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 14:07:06.925495 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:07:06.925578 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:07:06.946915 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 14:07:06.958045 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:07:06.958118 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:07:06.969792 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 14:07:06.980477 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:07:06.980546 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:07:06.992154 1971324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:07:07.047808 1971324 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:07:07.048054 1971324 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:07.167444 1971324 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:07.167631 1971324 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:07.167755 1971324 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:07:07.176704 1971324 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:07.178906 1971324 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:07.179018 1971324 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:07.179096 1971324 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:07.179214 1971324 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:07.179292 1971324 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:07.179407 1971324 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:07.179531 1971324 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:07.179632 1971324 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:07.179728 1971324 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:07.179830 1971324 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:07.179923 1971324 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:07.180006 1971324 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:07.180105 1971324 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:07.399949 1971324 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:07.525338 1971324 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:07:07.958528 1971324 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:08.085273 1971324 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:08.227675 1971324 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:08.228174 1971324 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:08.230880 1971324 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:08.232690 1971324 out.go:235]   - Booting up control plane ...
	I0120 14:07:08.232803 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:08.232885 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:08.233165 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:08.255003 1971324 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:08.263855 1971324 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:08.263966 1971324 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:08.414539 1971324 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:07:08.414702 1971324 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:07:08.915282 1971324 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.191909ms
	I0120 14:07:08.915410 1971324 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:07:14.418359 1971324 kubeadm.go:310] [api-check] The API server is healthy after 5.50145508s
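The kubelet-check message above corresponds to an HTTP liveness probe against the kubelet's local healthz endpoint. To repeat it by hand from inside the guest (illustrative; assumes this profile name and that curl is available in the VM image, as it is elsewhere in this report):

	  out/minikube-linux-amd64 -p default-k8s-diff-port-727256 ssh "curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy"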
	I0120 14:07:14.430935 1971324 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:07:14.460608 1971324 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:07:14.497450 1971324 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:07:14.497787 1971324 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-727256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:07:14.515343 1971324 kubeadm.go:310] [bootstrap-token] Using token: tkd27p.2n22jx81j70drifi
	I0120 14:07:14.516953 1971324 out.go:235]   - Configuring RBAC rules ...
	I0120 14:07:14.517145 1971324 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:07:14.535550 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:07:14.549490 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:07:14.554516 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:07:14.559606 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:07:14.567744 1971324 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:07:14.823696 1971324 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:07:15.255724 1971324 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:07:15.828061 1971324 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:07:15.829612 1971324 kubeadm.go:310] 
	I0120 14:07:15.829720 1971324 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:07:15.829734 1971324 kubeadm.go:310] 
	I0120 14:07:15.829934 1971324 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:07:15.829961 1971324 kubeadm.go:310] 
	I0120 14:07:15.829995 1971324 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:07:15.830134 1971324 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:07:15.830216 1971324 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:07:15.830238 1971324 kubeadm.go:310] 
	I0120 14:07:15.830300 1971324 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:07:15.830307 1971324 kubeadm.go:310] 
	I0120 14:07:15.830345 1971324 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:07:15.830351 1971324 kubeadm.go:310] 
	I0120 14:07:15.830452 1971324 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:07:15.830564 1971324 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:07:15.830687 1971324 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:07:15.830712 1971324 kubeadm.go:310] 
	I0120 14:07:15.830839 1971324 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:07:15.830917 1971324 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:07:15.830928 1971324 kubeadm.go:310] 
	I0120 14:07:15.831050 1971324 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token tkd27p.2n22jx81j70drifi \
	I0120 14:07:15.831203 1971324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:07:15.831236 1971324 kubeadm.go:310] 	--control-plane 
	I0120 14:07:15.831250 1971324 kubeadm.go:310] 
	I0120 14:07:15.831373 1971324 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:07:15.831381 1971324 kubeadm.go:310] 
	I0120 14:07:15.831510 1971324 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token tkd27p.2n22jx81j70drifi \
	I0120 14:07:15.831608 1971324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:07:15.832608 1971324 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:07:15.832644 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:07:15.832665 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:07:15.834574 1971324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:07:15.836200 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:07:15.852486 1971324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
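The 496-byte file written here is the bridge CNI config that minikube recommends for the kvm2 + crio combination noted above. To inspect what actually landed on the node, the same `minikube ssh` form used throughout this report works (illustrative commands):

	  out/minikube-linux-amd64 -p default-k8s-diff-port-727256 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
	  out/minikube-linux-amd64 -p default-k8s-diff-port-727256 ssh "ls /opt/cni/bin"   # the standard CNI plugin binaries (bridge, host-local, ...) are expected here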
	I0120 14:07:15.883072 1971324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:07:15.883163 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:15.883217 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-727256 minikube.k8s.io/updated_at=2025_01_20T14_07_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=default-k8s-diff-port-727256 minikube.k8s.io/primary=true
	I0120 14:07:15.919057 1971324 ops.go:34] apiserver oom_adj: -16
	I0120 14:07:16.264800 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:16.765768 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:17.265700 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:17.765591 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:18.265120 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:18.765375 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.265828 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.765233 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.871124 1971324 kubeadm.go:1113] duration metric: took 3.988031359s to wait for elevateKubeSystemPrivileges
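The burst of `kubectl get sa default` calls above is a poll: the elevateKubeSystemPrivileges step whose duration is reported here keeps retrying until the controller manager has created the "default" ServiceAccount. A hand-rolled equivalent of that wait, run inside the guest (a sketch, not the test's own code):

	  until sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # retry until the default ServiceAccount exists
	  done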
	I0120 14:07:19.871168 1971324 kubeadm.go:394] duration metric: took 5m1.294931591s to StartCluster
	I0120 14:07:19.871195 1971324 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:07:19.871308 1971324 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:07:19.872935 1971324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:07:19.873227 1971324 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:07:19.873360 1971324 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:07:19.873432 1971324 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873448 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:07:19.873475 1971324 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873456 1971324 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873525 1971324 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:07:19.873515 1971324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-727256"
	I0120 14:07:19.873512 1971324 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873579 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873591 1971324 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873602 1971324 addons.go:247] addon dashboard should already be in state true
	I0120 14:07:19.873461 1971324 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873645 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873644 1971324 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873658 1971324 addons.go:247] addon metrics-server should already be in state true
	I0120 14:07:19.873693 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873994 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874028 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874069 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874104 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874122 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874160 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874182 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874249 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.875156 1971324 out.go:177] * Verifying Kubernetes components...
	I0120 14:07:19.877548 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:07:19.894903 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41867
	I0120 14:07:19.895611 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0120 14:07:19.895799 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0120 14:07:19.895810 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0120 14:07:19.896235 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896371 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896374 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896427 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896946 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.896965 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897049 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897061 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897097 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897109 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897171 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897179 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897407 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897504 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897554 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.897763 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897815 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.898170 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.898210 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.898503 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.898556 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.899598 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.899642 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.901013 1971324 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.901024 1971324 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:07:19.901047 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.901256 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.901294 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.921489 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0120 14:07:19.922200 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.922354 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I0120 14:07:19.922487 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0120 14:07:19.923012 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.923115 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.923351 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.923371 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.923750 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.923773 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.923903 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.924012 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.924035 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.924227 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.925245 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.925523 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.926174 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.926409 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.926777 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0120 14:07:19.927338 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.927812 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.928588 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.928606 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.928749 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.928849 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.929144 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.929629 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.929667 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.930118 1971324 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:07:19.931197 1971324 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:07:19.931224 1971324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:07:19.933008 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:07:19.933033 1971324 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:07:19.933058 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.933259 1971324 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:07:19.933369 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:07:19.933389 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.933347 1971324 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:07:19.934800 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:07:19.934818 1971324 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:07:19.934847 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.937550 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.937957 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.937999 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.938124 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.938295 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.938406 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.938486 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.938817 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940255 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940625 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.940648 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940917 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.940993 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.941018 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.941159 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.941305 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.941350 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.941478 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.941512 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.941902 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.942284 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.948962 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34819
	I0120 14:07:19.949405 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.949966 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.949989 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.950388 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.950699 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.952288 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.952507 1971324 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:07:19.952523 1971324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:07:19.952542 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.956242 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.956713 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.956743 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.956859 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.957008 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.957169 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.957470 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
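Each "new ssh client" line above records the connection details minikube uses to copy the addon manifests onto the node. The same session can be opened by hand with the key path and user shown there (illustrative):

	  ssh -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa docker@192.168.72.104
	  # or, equivalently, through the CLI wrapper:
	  out/minikube-linux-amd64 -p default-k8s-diff-port-727256 ssh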
	I0120 14:07:20.127114 1971324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:07:20.154612 1971324 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-727256" to be "Ready" ...
	I0120 14:07:20.192263 1971324 node_ready.go:49] node "default-k8s-diff-port-727256" has status "Ready":"True"
	I0120 14:07:20.192290 1971324 node_ready.go:38] duration metric: took 37.635597ms for node "default-k8s-diff-port-727256" to be "Ready" ...
	I0120 14:07:20.192301 1971324 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
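The node and system-pod readiness loops that start here map onto standard `kubectl wait` invocations. A rough manual equivalent (a sketch; it assumes the kubeconfig context carries the profile name, which is minikube's default behaviour):

	  kubectl --context default-k8s-diff-port-727256 wait --for=condition=Ready node/default-k8s-diff-port-727256 --timeout=6m
	  kubectl --context default-k8s-diff-port-727256 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m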
	I0120 14:07:20.213859 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:07:20.213892 1971324 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:07:20.231942 1971324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:20.258778 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:07:20.282980 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:07:20.283031 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:07:20.283840 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:07:20.283868 1971324 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:07:20.313871 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:07:20.313902 1971324 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:07:20.343875 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:07:20.343906 1971324 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:07:20.366130 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:07:20.366161 1971324 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:07:20.377530 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:07:20.391855 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:07:20.391890 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:07:20.422771 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:07:20.490042 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:07:20.490070 1971324 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:07:20.668552 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.668581 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.668941 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.669010 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.669026 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.669028 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:20.669036 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.669363 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.669390 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.675996 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.676026 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.676331 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.676388 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.676354 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:20.680026 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:07:20.680052 1971324 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:07:20.807657 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:07:20.807698 1971324 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:07:20.876039 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:07:20.876068 1971324 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:07:20.999452 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:07:20.999483 1971324 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:07:21.023485 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:07:21.643979 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266406433s)
	I0120 14:07:21.644056 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:21.644071 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:21.644447 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:21.644477 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:21.644506 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:21.644521 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:21.644831 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:21.644845 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.256978 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:22.324244 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.901426994s)
	I0120 14:07:22.324341 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:22.324361 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:22.324787 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:22.324849 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:22.324866 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.324875 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:22.324883 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:22.325248 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:22.325278 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:22.325285 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.325302 1971324 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-727256"
	I0120 14:07:23.339621 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.316057578s)
	I0120 14:07:23.339712 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:23.339732 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:23.340118 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:23.340201 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:23.340216 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:23.340225 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:23.340136 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:23.340517 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:23.342106 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:23.342125 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:23.343861 1971324 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-727256 addons enable metrics-server
	
	I0120 14:07:23.345414 1971324 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:07:23.346269 1971324 addons.go:514] duration metric: took 3.472914176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
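With the four addons enabled, their state can be double-checked from the host; `addons list` is the read-only counterpart of the `addons enable` hint printed above, and the dashboard addon typically deploys into the kubernetes-dashboard namespace (illustrative commands):

	  out/minikube-linux-amd64 -p default-k8s-diff-port-727256 addons list
	  kubectl --context default-k8s-diff-port-727256 -n kubernetes-dashboard get pods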
	I0120 14:07:24.739396 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:26.739597 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:27.738986 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.739017 1971324 pod_ready.go:82] duration metric: took 7.507037469s for pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.739032 1971324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.745501 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.745528 1971324 pod_ready.go:82] duration metric: took 6.487852ms for pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.745540 1971324 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.750780 1971324 pod_ready.go:93] pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.750815 1971324 pod_ready.go:82] duration metric: took 5.263354ms for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.750829 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.757357 1971324 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.757387 1971324 pod_ready.go:82] duration metric: took 6.549516ms for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.757400 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.763302 1971324 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.763332 1971324 pod_ready.go:82] duration metric: took 5.92298ms for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.763347 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6vtjs" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.139358 1971324 pod_ready.go:93] pod "kube-proxy-6vtjs" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:28.139385 1971324 pod_ready.go:82] duration metric: took 376.030461ms for pod "kube-proxy-6vtjs" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.139395 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.536558 1971324 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:28.536595 1971324 pod_ready.go:82] duration metric: took 397.192361ms for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.536609 1971324 pod_ready.go:39] duration metric: took 8.344296802s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:07:28.536633 1971324 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:07:28.536700 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:07:28.573027 1971324 api_server.go:72] duration metric: took 8.699758175s to wait for apiserver process to appear ...
	I0120 14:07:28.573068 1971324 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:07:28.573101 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:07:28.578383 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0120 14:07:28.579376 1971324 api_server.go:141] control plane version: v1.32.0
	I0120 14:07:28.579402 1971324 api_server.go:131] duration metric: took 6.325441ms to wait for apiserver health ...
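The process check and healthz probe above can be reproduced directly; the second command targets the non-default API server port 8444 this profile was started with (illustrative, run from the host, with -k because the cluster CA is not in the system trust store):

	  out/minikube-linux-amd64 -p default-k8s-diff-port-727256 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
	  curl -sk https://192.168.72.104:8444/healthz   # expected body: ok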
	I0120 14:07:28.579413 1971324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:07:28.743059 1971324 system_pods.go:59] 9 kube-system pods found
	I0120 14:07:28.743094 1971324 system_pods.go:61] "coredns-668d6bf9bc-l4rmh" [06f4698d-c393-4f30-b8de-77ade02b575e] Running
	I0120 14:07:28.743100 1971324 system_pods.go:61] "coredns-668d6bf9bc-v22vm" [95644362-4ab9-405f-b433-5b384ab083d1] Running
	I0120 14:07:28.743104 1971324 system_pods.go:61] "etcd-default-k8s-diff-port-727256" [888345c9-ff71-44eb-9501-6a878f6e7fce] Running
	I0120 14:07:28.743108 1971324 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-727256" [2c11d7e2-9f34-4861-977b-7559572c5eb9] Running
	I0120 14:07:28.743111 1971324 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-727256" [f6202668-dca8-46a8-9ac2-d58b96bda552] Running
	I0120 14:07:28.743115 1971324 system_pods.go:61] "kube-proxy-6vtjs" [d57cfd3b-d6bd-4e61-a606-b2451a3768ca] Running
	I0120 14:07:28.743118 1971324 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-727256" [764e1f75-6402-4ce2-9d44-5d8af5dbb0e8] Running
	I0120 14:07:28.743124 1971324 system_pods.go:61] "metrics-server-f79f97bbb-kp5hl" [190513f9-3e9f-4705-ae23-9481987802f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:07:28.743129 1971324 system_pods.go:61] "storage-provisioner" [0f716b6a-f5d2-49a0-a810-e0cdf72a3020] Running
	I0120 14:07:28.743136 1971324 system_pods.go:74] duration metric: took 163.71699ms to wait for pod list to return data ...
	I0120 14:07:28.743145 1971324 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:07:28.937247 1971324 default_sa.go:45] found service account: "default"
	I0120 14:07:28.937280 1971324 default_sa.go:55] duration metric: took 194.12949ms for default service account to be created ...
	I0120 14:07:28.937291 1971324 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:07:29.391088 1971324 system_pods.go:87] 9 kube-system pods found

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-727256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0": signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-727256 -n default-k8s-diff-port-727256
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-727256 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-727256 logs -n 25: (2.841093465s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-798303 sudo systemctl                        | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo journalctl                       | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo cat                              | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo cat                              | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo systemctl                        | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo systemctl                        | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo cat                              | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo docker                           | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo systemctl                        | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo systemctl                        | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo cat                              | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo cat                              | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo                                  | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo systemctl                        | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo systemctl                        | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo cat                              | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo cat                              | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo containerd                       | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo systemctl                        | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo systemctl                        | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo find                             | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-798303 sudo crio                             | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-798303                                       | auto-798303           | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC | 20 Jan 25 14:27 UTC |
	| start   | -p custom-flannel-798303                             | custom-flannel-798303 | jenkins | v1.35.0 | 20 Jan 25 14:27 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-798303 pgrep -a                           | kindnet-798303        | jenkins | v1.35.0 | 20 Jan 25 14:28 UTC | 20 Jan 25 14:28 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 14:27:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 14:27:47.472445 1981052 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:27:47.473011 1981052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:27:47.476218 1981052 out.go:358] Setting ErrFile to fd 2...
	I0120 14:27:47.476304 1981052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:27:47.476670 1981052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 14:27:47.477550 1981052 out.go:352] Setting JSON to false
	I0120 14:27:47.479230 1981052 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":22213,"bootTime":1737361054,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:27:47.479330 1981052 start.go:139] virtualization: kvm guest
	I0120 14:27:47.481505 1981052 out.go:177] * [custom-flannel-798303] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:27:47.483282 1981052 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:27:47.483296 1981052 notify.go:220] Checking for updates...
	I0120 14:27:47.485727 1981052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:27:47.487301 1981052 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:27:47.488838 1981052 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:27:47.490378 1981052 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:27:47.491725 1981052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:27:47.493817 1981052 config.go:182] Loaded profile config "calico-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:27:47.494022 1981052 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:27:47.494176 1981052 config.go:182] Loaded profile config "kindnet-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:27:47.494324 1981052 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:27:47.539354 1981052 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 14:27:47.540426 1981052 start.go:297] selected driver: kvm2
	I0120 14:27:47.540440 1981052 start.go:901] validating driver "kvm2" against <nil>
	I0120 14:27:47.540452 1981052 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:27:47.541456 1981052 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:27:47.541580 1981052 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:27:47.559082 1981052 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:27:47.559140 1981052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 14:27:47.559399 1981052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:27:47.559431 1981052 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0120 14:27:47.559445 1981052 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0120 14:27:47.559520 1981052 start.go:340] cluster config:
	{Name:custom-flannel-798303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-798303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:27:47.559639 1981052 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:27:47.561284 1981052 out.go:177] * Starting "custom-flannel-798303" primary control-plane node in "custom-flannel-798303" cluster
	I0120 14:27:47.691600 1979051 kubeadm.go:310] [api-check] The API server is healthy after 5.501263738s
	I0120 14:27:47.707730 1979051 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:27:47.723516 1979051 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:27:47.758507 1979051 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:27:47.758743 1979051 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-798303 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:27:47.774269 1979051 kubeadm.go:310] [bootstrap-token] Using token: 4c3gvu.ppb29zvbggfghq7i
	I0120 14:27:47.775710 1979051 out.go:235]   - Configuring RBAC rules ...
	I0120 14:27:47.775862 1979051 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:27:47.790930 1979051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:27:47.802325 1979051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:27:47.807914 1979051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:27:47.814328 1979051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:27:47.818888 1979051 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:27:48.097984 1979051 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:27:48.549612 1979051 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:27:49.104744 1979051 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:27:49.104799 1979051 kubeadm.go:310] 
	I0120 14:27:49.104884 1979051 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:27:49.104904 1979051 kubeadm.go:310] 
	I0120 14:27:49.105028 1979051 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:27:49.105051 1979051 kubeadm.go:310] 
	I0120 14:27:49.105093 1979051 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:27:49.105182 1979051 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:27:49.105251 1979051 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:27:49.105269 1979051 kubeadm.go:310] 
	I0120 14:27:49.105313 1979051 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:27:49.105320 1979051 kubeadm.go:310] 
	I0120 14:27:49.105364 1979051 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:27:49.105371 1979051 kubeadm.go:310] 
	I0120 14:27:49.105423 1979051 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:27:49.105535 1979051 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:27:49.105640 1979051 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:27:49.105657 1979051 kubeadm.go:310] 
	I0120 14:27:49.105772 1979051 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:27:49.105887 1979051 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:27:49.105902 1979051 kubeadm.go:310] 
	I0120 14:27:49.106015 1979051 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4c3gvu.ppb29zvbggfghq7i \
	I0120 14:27:49.106164 1979051 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:27:49.106201 1979051 kubeadm.go:310] 	--control-plane 
	I0120 14:27:49.106212 1979051 kubeadm.go:310] 
	I0120 14:27:49.106355 1979051 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:27:49.106372 1979051 kubeadm.go:310] 
	I0120 14:27:49.106480 1979051 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4c3gvu.ppb29zvbggfghq7i \
	I0120 14:27:49.106639 1979051 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:27:49.107133 1979051 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:27:49.107170 1979051 cni.go:84] Creating CNI manager for "kindnet"
	I0120 14:27:49.109005 1979051 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0120 14:27:47.167715 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:47.168344 1979335 main.go:141] libmachine: (calico-798303) DBG | unable to find current IP address of domain calico-798303 in network mk-calico-798303
	I0120 14:27:47.168375 1979335 main.go:141] libmachine: (calico-798303) DBG | I0120 14:27:47.168317 1979586 retry.go:31] will retry after 3.333760111s: waiting for domain to come up
	I0120 14:27:49.110416 1979051 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0120 14:27:49.116948 1979051 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 14:27:49.116971 1979051 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0120 14:27:49.143140 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 14:27:49.434704 1979051 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:27:49.434793 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:27:49.434848 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-798303 minikube.k8s.io/updated_at=2025_01_20T14_27_49_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=kindnet-798303 minikube.k8s.io/primary=true
	I0120 14:27:49.688597 1979051 ops.go:34] apiserver oom_adj: -16
	I0120 14:27:49.688785 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:27:50.189724 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:27:50.689904 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:27:47.562437 1981052 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:27:47.562482 1981052 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 14:27:47.562492 1981052 cache.go:56] Caching tarball of preloaded images
	I0120 14:27:47.562633 1981052 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 14:27:47.562650 1981052 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 14:27:47.562788 1981052 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/custom-flannel-798303/config.json ...
	I0120 14:27:47.562823 1981052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/custom-flannel-798303/config.json: {Name:mk6b766b3688668198283a1101622802ec71fe21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:27:47.563011 1981052 start.go:360] acquireMachinesLock for custom-flannel-798303: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:27:51.188971 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:27:51.689558 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:27:52.189728 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:27:52.689237 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:27:53.188923 1979051 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:27:53.325340 1979051 kubeadm.go:1113] duration metric: took 3.890617886s to wait for elevateKubeSystemPrivileges
	I0120 14:27:53.325387 1979051 kubeadm.go:394] duration metric: took 16.27399806s to StartCluster
	I0120 14:27:53.325438 1979051 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:27:53.325553 1979051 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:27:53.326700 1979051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:27:53.326940 1979051 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.127 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:27:53.327022 1979051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 14:27:53.327047 1979051 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:27:53.327162 1979051 addons.go:69] Setting storage-provisioner=true in profile "kindnet-798303"
	I0120 14:27:53.327186 1979051 addons.go:238] Setting addon storage-provisioner=true in "kindnet-798303"
	I0120 14:27:53.327187 1979051 addons.go:69] Setting default-storageclass=true in profile "kindnet-798303"
	I0120 14:27:53.327198 1979051 config.go:182] Loaded profile config "kindnet-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:27:53.327210 1979051 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-798303"
	I0120 14:27:53.327230 1979051 host.go:66] Checking if "kindnet-798303" exists ...
	I0120 14:27:53.327630 1979051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:27:53.327672 1979051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:27:53.327707 1979051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:27:53.327748 1979051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:27:53.328485 1979051 out.go:177] * Verifying Kubernetes components...
	I0120 14:27:53.329866 1979051 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:27:53.345796 1979051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0120 14:27:53.345822 1979051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I0120 14:27:53.346388 1979051 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:27:53.346410 1979051 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:27:53.346985 1979051 main.go:141] libmachine: Using API Version  1
	I0120 14:27:53.347015 1979051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:27:53.347162 1979051 main.go:141] libmachine: Using API Version  1
	I0120 14:27:53.347190 1979051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:27:53.347388 1979051 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:27:53.347588 1979051 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:27:53.347641 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetState
	I0120 14:27:53.348203 1979051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:27:53.348255 1979051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:27:53.351700 1979051 addons.go:238] Setting addon default-storageclass=true in "kindnet-798303"
	I0120 14:27:53.351779 1979051 host.go:66] Checking if "kindnet-798303" exists ...
	I0120 14:27:53.352209 1979051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:27:53.352265 1979051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:27:53.367632 1979051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0120 14:27:53.368295 1979051 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:27:53.368918 1979051 main.go:141] libmachine: Using API Version  1
	I0120 14:27:53.368942 1979051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:27:53.369094 1979051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45517
	I0120 14:27:53.369568 1979051 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:27:53.369617 1979051 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:27:53.369898 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetState
	I0120 14:27:53.370078 1979051 main.go:141] libmachine: Using API Version  1
	I0120 14:27:53.370099 1979051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:27:53.370495 1979051 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:27:53.371157 1979051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:27:53.371220 1979051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:27:53.372522 1979051 main.go:141] libmachine: (kindnet-798303) Calling .DriverName
	I0120 14:27:53.374694 1979051 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:27:53.376108 1979051 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:27:53.376132 1979051 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:27:53.376159 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetSSHHostname
	I0120 14:27:53.380649 1979051 main.go:141] libmachine: (kindnet-798303) DBG | domain kindnet-798303 has defined MAC address 52:54:00:06:4e:03 in network mk-kindnet-798303
	I0120 14:27:53.381131 1979051 main.go:141] libmachine: (kindnet-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:4e:03", ip: ""} in network mk-kindnet-798303: {Iface:virbr3 ExpiryTime:2025-01-20 15:27:22 +0000 UTC Type:0 Mac:52:54:00:06:4e:03 Iaid: IPaddr:192.168.61.127 Prefix:24 Hostname:kindnet-798303 Clientid:01:52:54:00:06:4e:03}
	I0120 14:27:53.381163 1979051 main.go:141] libmachine: (kindnet-798303) DBG | domain kindnet-798303 has defined IP address 192.168.61.127 and MAC address 52:54:00:06:4e:03 in network mk-kindnet-798303
	I0120 14:27:53.381484 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetSSHPort
	I0120 14:27:53.381695 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetSSHKeyPath
	I0120 14:27:53.381922 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetSSHUsername
	I0120 14:27:53.382089 1979051 sshutil.go:53] new ssh client: &{IP:192.168.61.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kindnet-798303/id_rsa Username:docker}
	I0120 14:27:53.389498 1979051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
	I0120 14:27:53.390079 1979051 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:27:53.390702 1979051 main.go:141] libmachine: Using API Version  1
	I0120 14:27:53.390731 1979051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:27:53.391096 1979051 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:27:53.391333 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetState
	I0120 14:27:53.393248 1979051 main.go:141] libmachine: (kindnet-798303) Calling .DriverName
	I0120 14:27:53.393479 1979051 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:27:53.393500 1979051 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:27:53.393522 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetSSHHostname
	I0120 14:27:53.396587 1979051 main.go:141] libmachine: (kindnet-798303) DBG | domain kindnet-798303 has defined MAC address 52:54:00:06:4e:03 in network mk-kindnet-798303
	I0120 14:27:53.397051 1979051 main.go:141] libmachine: (kindnet-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:4e:03", ip: ""} in network mk-kindnet-798303: {Iface:virbr3 ExpiryTime:2025-01-20 15:27:22 +0000 UTC Type:0 Mac:52:54:00:06:4e:03 Iaid: IPaddr:192.168.61.127 Prefix:24 Hostname:kindnet-798303 Clientid:01:52:54:00:06:4e:03}
	I0120 14:27:53.397082 1979051 main.go:141] libmachine: (kindnet-798303) DBG | domain kindnet-798303 has defined IP address 192.168.61.127 and MAC address 52:54:00:06:4e:03 in network mk-kindnet-798303
	I0120 14:27:53.397281 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetSSHPort
	I0120 14:27:53.397481 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetSSHKeyPath
	I0120 14:27:53.397660 1979051 main.go:141] libmachine: (kindnet-798303) Calling .GetSSHUsername
	I0120 14:27:53.397849 1979051 sshutil.go:53] new ssh client: &{IP:192.168.61.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/kindnet-798303/id_rsa Username:docker}
	I0120 14:27:53.668551 1979051 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0120 14:27:53.668558 1979051 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:27:53.700599 1979051 node_ready.go:35] waiting up to 15m0s for node "kindnet-798303" to be "Ready" ...
	I0120 14:27:53.783172 1979051 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:27:53.815162 1979051 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:27:54.263895 1979051 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0120 14:27:54.633831 1979051 main.go:141] libmachine: Making call to close driver server
	I0120 14:27:54.633857 1979051 main.go:141] libmachine: (kindnet-798303) Calling .Close
	I0120 14:27:54.634001 1979051 main.go:141] libmachine: Making call to close driver server
	I0120 14:27:54.634028 1979051 main.go:141] libmachine: (kindnet-798303) Calling .Close
	I0120 14:27:54.634240 1979051 main.go:141] libmachine: (kindnet-798303) DBG | Closing plugin on server side
	I0120 14:27:54.634294 1979051 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:27:54.634305 1979051 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:27:54.634319 1979051 main.go:141] libmachine: Making call to close driver server
	I0120 14:27:54.634330 1979051 main.go:141] libmachine: (kindnet-798303) Calling .Close
	I0120 14:27:54.634459 1979051 main.go:141] libmachine: (kindnet-798303) DBG | Closing plugin on server side
	I0120 14:27:54.634459 1979051 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:27:54.634515 1979051 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:27:54.634572 1979051 main.go:141] libmachine: (kindnet-798303) DBG | Closing plugin on server side
	I0120 14:27:54.634585 1979051 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:27:54.634623 1979051 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:27:54.634637 1979051 main.go:141] libmachine: Making call to close driver server
	I0120 14:27:54.634650 1979051 main.go:141] libmachine: (kindnet-798303) Calling .Close
	I0120 14:27:54.634883 1979051 main.go:141] libmachine: (kindnet-798303) DBG | Closing plugin on server side
	I0120 14:27:54.634924 1979051 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:27:54.634957 1979051 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:27:54.647584 1979051 main.go:141] libmachine: Making call to close driver server
	I0120 14:27:54.647602 1979051 main.go:141] libmachine: (kindnet-798303) Calling .Close
	I0120 14:27:54.647953 1979051 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:27:54.647975 1979051 main.go:141] libmachine: (kindnet-798303) DBG | Closing plugin on server side
	I0120 14:27:54.648005 1979051 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:27:54.651813 1979051 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0120 14:27:50.504080 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:50.504591 1979335 main.go:141] libmachine: (calico-798303) DBG | unable to find current IP address of domain calico-798303 in network mk-calico-798303
	I0120 14:27:50.504627 1979335 main.go:141] libmachine: (calico-798303) DBG | I0120 14:27:50.504572 1979586 retry.go:31] will retry after 5.412289392s: waiting for domain to come up
	I0120 14:27:54.653817 1979051 addons.go:514] duration metric: took 1.32677702s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0120 14:27:54.769787 1979051 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-798303" context rescaled to 1 replicas
	I0120 14:27:55.704281 1979051 node_ready.go:53] node "kindnet-798303" has status "Ready":"False"
	I0120 14:27:57.384255 1981052 start.go:364] duration metric: took 9.821184769s to acquireMachinesLock for "custom-flannel-798303"
	I0120 14:27:57.384330 1981052 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-798303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-798303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:27:57.384473 1981052 start.go:125] createHost starting for "" (driver="kvm2")
	I0120 14:27:57.386783 1981052 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0120 14:27:57.387018 1981052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:27:57.387083 1981052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:27:57.408081 1981052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40815
	I0120 14:27:57.408646 1981052 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:27:57.409340 1981052 main.go:141] libmachine: Using API Version  1
	I0120 14:27:57.409392 1981052 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:27:57.409766 1981052 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:27:57.409983 1981052 main.go:141] libmachine: (custom-flannel-798303) Calling .GetMachineName
	I0120 14:27:57.410186 1981052 main.go:141] libmachine: (custom-flannel-798303) Calling .DriverName
	I0120 14:27:57.410348 1981052 start.go:159] libmachine.API.Create for "custom-flannel-798303" (driver="kvm2")
	I0120 14:27:57.410381 1981052 client.go:168] LocalClient.Create starting
	I0120 14:27:57.410420 1981052 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem
	I0120 14:27:57.410461 1981052 main.go:141] libmachine: Decoding PEM data...
	I0120 14:27:57.410484 1981052 main.go:141] libmachine: Parsing certificate...
	I0120 14:27:57.410551 1981052 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem
	I0120 14:27:57.410580 1981052 main.go:141] libmachine: Decoding PEM data...
	I0120 14:27:57.410597 1981052 main.go:141] libmachine: Parsing certificate...
	I0120 14:27:57.410659 1981052 main.go:141] libmachine: Running pre-create checks...
	I0120 14:27:57.410681 1981052 main.go:141] libmachine: (custom-flannel-798303) Calling .PreCreateCheck
	I0120 14:27:57.411081 1981052 main.go:141] libmachine: (custom-flannel-798303) Calling .GetConfigRaw
	I0120 14:27:57.411514 1981052 main.go:141] libmachine: Creating machine...
	I0120 14:27:57.411528 1981052 main.go:141] libmachine: (custom-flannel-798303) Calling .Create
	I0120 14:27:57.411701 1981052 main.go:141] libmachine: (custom-flannel-798303) creating KVM machine...
	I0120 14:27:57.411739 1981052 main.go:141] libmachine: (custom-flannel-798303) creating network...
	I0120 14:27:57.413261 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | found existing default KVM network
	I0120 14:27:57.415302 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:27:57.415093 1981169 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e4190}
	I0120 14:27:57.415323 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | created network xml: 
	I0120 14:27:57.415334 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | <network>
	I0120 14:27:57.415342 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG |   <name>mk-custom-flannel-798303</name>
	I0120 14:27:57.415354 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG |   <dns enable='no'/>
	I0120 14:27:57.415361 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG |   
	I0120 14:27:57.415402 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0120 14:27:57.415421 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG |     <dhcp>
	I0120 14:27:57.415433 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0120 14:27:57.415448 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG |     </dhcp>
	I0120 14:27:57.415462 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG |   </ip>
	I0120 14:27:57.415469 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG |   
	I0120 14:27:57.415489 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | </network>
	I0120 14:27:57.415495 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | 
	I0120 14:27:57.421676 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | trying to create private KVM network mk-custom-flannel-798303 192.168.39.0/24...
	I0120 14:27:55.919160 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:55.919572 1979335 main.go:141] libmachine: (calico-798303) found domain IP: 192.168.50.150
	I0120 14:27:55.919605 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has current primary IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:55.919614 1979335 main.go:141] libmachine: (calico-798303) reserving static IP address...
	I0120 14:27:55.919912 1979335 main.go:141] libmachine: (calico-798303) DBG | unable to find host DHCP lease matching {name: "calico-798303", mac: "52:54:00:09:ca:f7", ip: "192.168.50.150"} in network mk-calico-798303
	I0120 14:27:56.002525 1979335 main.go:141] libmachine: (calico-798303) DBG | Getting to WaitForSSH function...
	I0120 14:27:56.002564 1979335 main.go:141] libmachine: (calico-798303) reserved static IP address 192.168.50.150 for domain calico-798303
	I0120 14:27:56.002577 1979335 main.go:141] libmachine: (calico-798303) waiting for SSH...
	I0120 14:27:56.005944 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.006543 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:56.006578 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.006733 1979335 main.go:141] libmachine: (calico-798303) DBG | Using SSH client type: external
	I0120 14:27:56.006766 1979335 main.go:141] libmachine: (calico-798303) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/calico-798303/id_rsa (-rw-------)
	I0120 14:27:56.006813 1979335 main.go:141] libmachine: (calico-798303) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/calico-798303/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:27:56.006830 1979335 main.go:141] libmachine: (calico-798303) DBG | About to run SSH command:
	I0120 14:27:56.006839 1979335 main.go:141] libmachine: (calico-798303) DBG | exit 0
	I0120 14:27:56.135331 1979335 main.go:141] libmachine: (calico-798303) DBG | SSH cmd err, output: <nil>: 
	I0120 14:27:56.135626 1979335 main.go:141] libmachine: (calico-798303) KVM machine creation complete
	I0120 14:27:56.135992 1979335 main.go:141] libmachine: (calico-798303) Calling .GetConfigRaw
	I0120 14:27:56.136618 1979335 main.go:141] libmachine: (calico-798303) Calling .DriverName
	I0120 14:27:56.136825 1979335 main.go:141] libmachine: (calico-798303) Calling .DriverName
	I0120 14:27:56.136980 1979335 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0120 14:27:56.137004 1979335 main.go:141] libmachine: (calico-798303) Calling .GetState
	I0120 14:27:56.138585 1979335 main.go:141] libmachine: Detecting operating system of created instance...
	I0120 14:27:56.138601 1979335 main.go:141] libmachine: Waiting for SSH to be available...
	I0120 14:27:56.138626 1979335 main.go:141] libmachine: Getting to WaitForSSH function...
	I0120 14:27:56.138635 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:56.141138 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.141574 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:56.141599 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.141751 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:56.141925 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.142079 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.142224 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:56.142445 1979335 main.go:141] libmachine: Using SSH client type: native
	I0120 14:27:56.142730 1979335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0120 14:27:56.142745 1979335 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0120 14:27:56.250939 1979335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:27:56.250974 1979335 main.go:141] libmachine: Detecting the provisioner...
	I0120 14:27:56.250986 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:56.254471 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.254911 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:56.254941 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.255127 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:56.255339 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.255518 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.255695 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:56.255874 1979335 main.go:141] libmachine: Using SSH client type: native
	I0120 14:27:56.256145 1979335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0120 14:27:56.256160 1979335 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0120 14:27:56.372017 1979335 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0120 14:27:56.372096 1979335 main.go:141] libmachine: found compatible host: buildroot
	I0120 14:27:56.372104 1979335 main.go:141] libmachine: Provisioning with buildroot...
	I0120 14:27:56.372112 1979335 main.go:141] libmachine: (calico-798303) Calling .GetMachineName
	I0120 14:27:56.372379 1979335 buildroot.go:166] provisioning hostname "calico-798303"
	I0120 14:27:56.372410 1979335 main.go:141] libmachine: (calico-798303) Calling .GetMachineName
	I0120 14:27:56.372613 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:56.375683 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.376025 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:56.376072 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.376260 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:56.376482 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.376642 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.376786 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:56.377000 1979335 main.go:141] libmachine: Using SSH client type: native
	I0120 14:27:56.377224 1979335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0120 14:27:56.377238 1979335 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-798303 && echo "calico-798303" | sudo tee /etc/hostname
	I0120 14:27:56.503702 1979335 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-798303
	
	I0120 14:27:56.503797 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:56.506811 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.507183 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:56.507205 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.507414 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:56.507647 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.507826 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.507944 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:56.508114 1979335 main.go:141] libmachine: Using SSH client type: native
	I0120 14:27:56.508321 1979335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0120 14:27:56.508344 1979335 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-798303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-798303/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-798303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:27:56.631168 1979335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:27:56.631205 1979335 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:27:56.631253 1979335 buildroot.go:174] setting up certificates
	I0120 14:27:56.631265 1979335 provision.go:84] configureAuth start
	I0120 14:27:56.631277 1979335 main.go:141] libmachine: (calico-798303) Calling .GetMachineName
	I0120 14:27:56.631619 1979335 main.go:141] libmachine: (calico-798303) Calling .GetIP
	I0120 14:27:56.634769 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.635170 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:56.635199 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.635344 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:56.637754 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.638193 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:56.638224 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.638415 1979335 provision.go:143] copyHostCerts
	I0120 14:27:56.638501 1979335 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:27:56.638528 1979335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:27:56.638624 1979335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:27:56.638752 1979335 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:27:56.638766 1979335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:27:56.638798 1979335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:27:56.638878 1979335 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:27:56.638888 1979335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:27:56.638921 1979335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:27:56.639000 1979335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.calico-798303 san=[127.0.0.1 192.168.50.150 calico-798303 localhost minikube]
	I0120 14:27:56.709708 1979335 provision.go:177] copyRemoteCerts
	I0120 14:27:56.709774 1979335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:27:56.709808 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:56.712527 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.712971 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:56.713003 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.713163 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:56.713408 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.713595 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:56.713799 1979335 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/calico-798303/id_rsa Username:docker}
	I0120 14:27:56.797784 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0120 14:27:56.824780 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 14:27:56.851112 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:27:56.877637 1979335 provision.go:87] duration metric: took 246.353188ms to configureAuth
	I0120 14:27:56.877669 1979335 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:27:56.877856 1979335 config.go:182] Loaded profile config "calico-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:27:56.877967 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:56.880839 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.881228 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:56.881259 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:56.881476 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:56.881710 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.881873 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:56.882042 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:56.882265 1979335 main.go:141] libmachine: Using SSH client type: native
	I0120 14:27:56.882430 1979335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0120 14:27:56.882444 1979335 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:27:57.118724 1979335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:27:57.118768 1979335 main.go:141] libmachine: Checking connection to Docker...
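The "native" SSH client mentioned above runs each provisioning command over a plain SSH session, like the /etc/sysconfig/crio.minikube write whose output is shown. A rough sketch of that pattern with golang.org/x/crypto/ssh; the key path comes from an environment variable here purely for illustration, and this is not minikube's sshutil code:

	// sshrun_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Assumptions: key path, user, and address stand in for the values the log
		// prints (Username:docker, IP 192.168.50.150, port 22).
		keyPEM, err := os.ReadFile(os.Getenv("MACHINE_SSH_KEY"))
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyPEM)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.50.150:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Same shape of command as the log: write a sysconfig drop-in, then restart crio.
		cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
		out, err := sess.CombinedOutput(cmd)
		fmt.Println(string(out))
		if err != nil {
			log.Fatal(err)
		}
	}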
	I0120 14:27:57.118782 1979335 main.go:141] libmachine: (calico-798303) Calling .GetURL
	I0120 14:27:57.120157 1979335 main.go:141] libmachine: (calico-798303) DBG | using libvirt version 6000000
	I0120 14:27:57.122920 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.123392 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:57.123434 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.123666 1979335 main.go:141] libmachine: Docker is up and running!
	I0120 14:27:57.123686 1979335 main.go:141] libmachine: Reticulating splines...
	I0120 14:27:57.123693 1979335 client.go:171] duration metric: took 27.209394105s to LocalClient.Create
	I0120 14:27:57.123719 1979335 start.go:167] duration metric: took 27.209462009s to libmachine.API.Create "calico-798303"
	I0120 14:27:57.123734 1979335 start.go:293] postStartSetup for "calico-798303" (driver="kvm2")
	I0120 14:27:57.123750 1979335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:27:57.123775 1979335 main.go:141] libmachine: (calico-798303) Calling .DriverName
	I0120 14:27:57.124042 1979335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:27:57.124068 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:57.126513 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.126902 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:57.126924 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.127121 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:57.127293 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:57.127420 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:57.127548 1979335 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/calico-798303/id_rsa Username:docker}
	I0120 14:27:57.218107 1979335 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:27:57.222894 1979335 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:27:57.222924 1979335 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:27:57.222988 1979335 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:27:57.223065 1979335 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:27:57.223166 1979335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:27:57.236278 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:27:57.262916 1979335 start.go:296] duration metric: took 139.161631ms for postStartSetup
	I0120 14:27:57.262979 1979335 main.go:141] libmachine: (calico-798303) Calling .GetConfigRaw
	I0120 14:27:57.263635 1979335 main.go:141] libmachine: (calico-798303) Calling .GetIP
	I0120 14:27:57.266510 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.266900 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:57.266932 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.267202 1979335 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/config.json ...
	I0120 14:27:57.267452 1979335 start.go:128] duration metric: took 27.37846791s to createHost
	I0120 14:27:57.267482 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:57.269942 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.270261 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:57.270292 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.270512 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:57.270692 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:57.270827 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:57.270984 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:57.271112 1979335 main.go:141] libmachine: Using SSH client type: native
	I0120 14:27:57.271293 1979335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.50.150 22 <nil> <nil>}
	I0120 14:27:57.271306 1979335 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:27:57.384045 1979335 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737383277.356878360
	
	I0120 14:27:57.384073 1979335 fix.go:216] guest clock: 1737383277.356878360
	I0120 14:27:57.384083 1979335 fix.go:229] Guest: 2025-01-20 14:27:57.35687836 +0000 UTC Remote: 2025-01-20 14:27:57.267468504 +0000 UTC m=+47.546387954 (delta=89.409856ms)
	I0120 14:27:57.384133 1979335 fix.go:200] guest clock delta is within tolerance: 89.409856ms
	I0120 14:27:57.384146 1979335 start.go:83] releasing machines lock for "calico-798303", held for 27.495331612s
	I0120 14:27:57.384187 1979335 main.go:141] libmachine: (calico-798303) Calling .DriverName
	I0120 14:27:57.384533 1979335 main.go:141] libmachine: (calico-798303) Calling .GetIP
	I0120 14:27:57.387672 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.388039 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:57.388087 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.388289 1979335 main.go:141] libmachine: (calico-798303) Calling .DriverName
	I0120 14:27:57.388832 1979335 main.go:141] libmachine: (calico-798303) Calling .DriverName
	I0120 14:27:57.389087 1979335 main.go:141] libmachine: (calico-798303) Calling .DriverName
	I0120 14:27:57.389177 1979335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:27:57.389227 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:57.389325 1979335 ssh_runner.go:195] Run: cat /version.json
	I0120 14:27:57.389352 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:27:57.392076 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.392437 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.392466 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:57.392503 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.392665 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:57.392849 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:57.392896 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:57.392922 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:57.393003 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:57.393099 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:27:57.393185 1979335 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/calico-798303/id_rsa Username:docker}
	I0120 14:27:57.393211 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:27:57.393315 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:27:57.393476 1979335 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/calico-798303/id_rsa Username:docker}
	I0120 14:27:57.502740 1979335 ssh_runner.go:195] Run: systemctl --version
	I0120 14:27:57.510900 1979335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:27:57.687010 1979335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:27:57.694346 1979335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:27:57.694417 1979335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:27:57.712855 1979335 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:27:57.712877 1979335 start.go:495] detecting cgroup driver to use...
	I0120 14:27:57.712964 1979335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:27:57.731022 1979335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:27:57.748357 1979335 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:27:57.748434 1979335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:27:57.764451 1979335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:27:57.778794 1979335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:27:57.903425 1979335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:27:58.053801 1979335 docker.go:233] disabling docker service ...
	I0120 14:27:58.053892 1979335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:27:58.072776 1979335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:27:58.088288 1979335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:27:58.229529 1979335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:27:58.353529 1979335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:27:58.369401 1979335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:27:58.389548 1979335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 14:27:58.389626 1979335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:27:58.401515 1979335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:27:58.401593 1979335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:27:58.415113 1979335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:27:58.428620 1979335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:27:58.442913 1979335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:27:58.454913 1979335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:27:58.466198 1979335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:27:58.485458 1979335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
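The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager. The same effect, sketched as a pair of regexp replacements; this is a stand-in for the remote sed calls, not minikube's crio.go:

	// crioconf_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies the same replacements the remote sed calls make to
	// /etc/crio/crio.conf.d/02-crio.conf: pin the pause image and force cgroupfs.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in))
	}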
	I0120 14:27:58.498405 1979335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:27:58.509410 1979335 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:27:58.509496 1979335 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:27:58.527379 1979335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
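When the bridge-nf-call sysctl is missing, as in the status-255 error above, the remedy is to load br_netfilter and then turn on IPv4 forwarding. A sketch of that sequence; the helper name is made up and the program needs root to actually run:

	// netfilter_sketch.go - illustrative only; needs root to actually run.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter follows the sequence above: if the bridge-nf-call
	// sysctl file is absent, load br_netfilter, then enable IPv4 forwarding by
	// writing the proc file directly.
	func ensureBridgeNetfilter() error {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// Mirrors the "sudo modprobe br_netfilter" that follows the failed sysctl read.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("br_netfilter loaded and ip_forward enabled")
	}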
	I0120 14:27:58.540010 1979335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:27:58.686057 1979335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:27:58.794688 1979335 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:27:58.794763 1979335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:27:58.799968 1979335 start.go:563] Will wait 60s for crictl version
	I0120 14:27:58.800030 1979335 ssh_runner.go:195] Run: which crictl
	I0120 14:27:58.804798 1979335 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:27:58.848638 1979335 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:27:58.848756 1979335 ssh_runner.go:195] Run: crio --version
	I0120 14:27:58.882213 1979335 ssh_runner.go:195] Run: crio --version
	I0120 14:27:58.918179 1979335 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 14:27:58.919555 1979335 main.go:141] libmachine: (calico-798303) Calling .GetIP
	I0120 14:27:58.922901 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:58.923351 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:27:58.923386 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:27:58.923612 1979335 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0120 14:27:58.928527 1979335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
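The /etc/hosts one-liner above strips any stale host.minikube.internal entry and appends a fresh mapping to the gateway IP. The same transformation in plain Go, as a sketch of the effect only; minikube performs it remotely through ssh_runner:

	// hostspin_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"strings"
	)

	// pinHost reproduces the effect of the /etc/hosts one-liner above: drop any
	// existing line for the name, then append the fresh IP<TAB>name mapping.
	func pinHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		fmt.Print(pinHost("127.0.0.1\tlocalhost\n", "192.168.50.1", "host.minikube.internal"))
	}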
	I0120 14:27:58.943381 1979335 kubeadm.go:883] updating cluster {Name:calico-798303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:calico-798303 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.150 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:27:58.943513 1979335 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:27:58.943584 1979335 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:27:58.982784 1979335 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 14:27:58.982865 1979335 ssh_runner.go:195] Run: which lz4
	I0120 14:27:58.987772 1979335 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:27:58.992813 1979335 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:27:58.992854 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
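The preload decision above is driven by `sudo crictl images --output json`: when the expected kube-apiserver tag is absent, the preload tarball gets copied over instead. A sketch of that check against crictl's JSON shape; the sample payload below is made up:

	// preloadcheck_sketch.go - illustrative only.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// crictlImages matches the shape of `crictl images --output json`.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether any image carries the wanted tag; when it does not,
	// the caller falls back to copying and extracting the preload tarball.
	func hasImage(raw []byte, want string) (bool, error) {
		var imgs crictlImages
		if err := json.Unmarshal(raw, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`) // made-up payload
		ok, err := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.32.0")
		fmt.Println(ok, err) // false -> preload tarball gets scp'd over, as in the log
	}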
	I0120 14:27:57.705158 1979051 node_ready.go:53] node "kindnet-798303" has status "Ready":"False"
	I0120 14:27:59.706166 1979051 node_ready.go:53] node "kindnet-798303" has status "Ready":"False"
	I0120 14:27:57.506783 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | private KVM network mk-custom-flannel-798303 192.168.39.0/24 created
	I0120 14:27:57.506921 1981052 main.go:141] libmachine: (custom-flannel-798303) setting up store path in /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303 ...
	I0120 14:27:57.507004 1981052 main.go:141] libmachine: (custom-flannel-798303) building disk image from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 14:27:57.507158 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:27:57.507051 1981169 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:27:57.507338 1981052 main.go:141] libmachine: (custom-flannel-798303) Downloading /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0120 14:27:57.829525 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:27:57.829343 1981169 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303/id_rsa...
	I0120 14:27:57.882900 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:27:57.882716 1981169 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303/custom-flannel-798303.rawdisk...
	I0120 14:27:57.882943 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | Writing magic tar header
	I0120 14:27:57.882962 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | Writing SSH key tar header
	I0120 14:27:57.882977 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:27:57.882841 1981169 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303 ...
	I0120 14:27:57.882992 1981052 main.go:141] libmachine: (custom-flannel-798303) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303 (perms=drwx------)
	I0120 14:27:57.883011 1981052 main.go:141] libmachine: (custom-flannel-798303) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube/machines (perms=drwxr-xr-x)
	I0120 14:27:57.883023 1981052 main.go:141] libmachine: (custom-flannel-798303) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423/.minikube (perms=drwxr-xr-x)
	I0120 14:27:57.883086 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303
	I0120 14:27:57.883122 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines
	I0120 14:27:57.883135 1981052 main.go:141] libmachine: (custom-flannel-798303) setting executable bit set on /home/jenkins/minikube-integration/20242-1920423 (perms=drwxrwxr-x)
	I0120 14:27:57.883154 1981052 main.go:141] libmachine: (custom-flannel-798303) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0120 14:27:57.883171 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:27:57.883179 1981052 main.go:141] libmachine: (custom-flannel-798303) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0120 14:27:57.883192 1981052 main.go:141] libmachine: (custom-flannel-798303) creating domain...
	I0120 14:27:57.883209 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20242-1920423
	I0120 14:27:57.883222 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0120 14:27:57.883240 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | checking permissions on dir: /home/jenkins
	I0120 14:27:57.883270 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | checking permissions on dir: /home
	I0120 14:27:57.883286 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | skipping /home - not owner
	I0120 14:27:57.884423 1981052 main.go:141] libmachine: (custom-flannel-798303) define libvirt domain using xml: 
	I0120 14:27:57.884449 1981052 main.go:141] libmachine: (custom-flannel-798303) <domain type='kvm'>
	I0120 14:27:57.884458 1981052 main.go:141] libmachine: (custom-flannel-798303)   <name>custom-flannel-798303</name>
	I0120 14:27:57.884465 1981052 main.go:141] libmachine: (custom-flannel-798303)   <memory unit='MiB'>3072</memory>
	I0120 14:27:57.884473 1981052 main.go:141] libmachine: (custom-flannel-798303)   <vcpu>2</vcpu>
	I0120 14:27:57.884480 1981052 main.go:141] libmachine: (custom-flannel-798303)   <features>
	I0120 14:27:57.884490 1981052 main.go:141] libmachine: (custom-flannel-798303)     <acpi/>
	I0120 14:27:57.884500 1981052 main.go:141] libmachine: (custom-flannel-798303)     <apic/>
	I0120 14:27:57.884521 1981052 main.go:141] libmachine: (custom-flannel-798303)     <pae/>
	I0120 14:27:57.884532 1981052 main.go:141] libmachine: (custom-flannel-798303)     
	I0120 14:27:57.884541 1981052 main.go:141] libmachine: (custom-flannel-798303)   </features>
	I0120 14:27:57.884555 1981052 main.go:141] libmachine: (custom-flannel-798303)   <cpu mode='host-passthrough'>
	I0120 14:27:57.884567 1981052 main.go:141] libmachine: (custom-flannel-798303)   
	I0120 14:27:57.884575 1981052 main.go:141] libmachine: (custom-flannel-798303)   </cpu>
	I0120 14:27:57.884587 1981052 main.go:141] libmachine: (custom-flannel-798303)   <os>
	I0120 14:27:57.884595 1981052 main.go:141] libmachine: (custom-flannel-798303)     <type>hvm</type>
	I0120 14:27:57.884605 1981052 main.go:141] libmachine: (custom-flannel-798303)     <boot dev='cdrom'/>
	I0120 14:27:57.884615 1981052 main.go:141] libmachine: (custom-flannel-798303)     <boot dev='hd'/>
	I0120 14:27:57.884716 1981052 main.go:141] libmachine: (custom-flannel-798303)     <bootmenu enable='no'/>
	I0120 14:27:57.884761 1981052 main.go:141] libmachine: (custom-flannel-798303)   </os>
	I0120 14:27:57.884776 1981052 main.go:141] libmachine: (custom-flannel-798303)   <devices>
	I0120 14:27:57.884789 1981052 main.go:141] libmachine: (custom-flannel-798303)     <disk type='file' device='cdrom'>
	I0120 14:27:57.884819 1981052 main.go:141] libmachine: (custom-flannel-798303)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303/boot2docker.iso'/>
	I0120 14:27:57.884840 1981052 main.go:141] libmachine: (custom-flannel-798303)       <target dev='hdc' bus='scsi'/>
	I0120 14:27:57.884854 1981052 main.go:141] libmachine: (custom-flannel-798303)       <readonly/>
	I0120 14:27:57.884864 1981052 main.go:141] libmachine: (custom-flannel-798303)     </disk>
	I0120 14:27:57.884875 1981052 main.go:141] libmachine: (custom-flannel-798303)     <disk type='file' device='disk'>
	I0120 14:27:57.884888 1981052 main.go:141] libmachine: (custom-flannel-798303)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0120 14:27:57.884906 1981052 main.go:141] libmachine: (custom-flannel-798303)       <source file='/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303/custom-flannel-798303.rawdisk'/>
	I0120 14:27:57.884918 1981052 main.go:141] libmachine: (custom-flannel-798303)       <target dev='hda' bus='virtio'/>
	I0120 14:27:57.884950 1981052 main.go:141] libmachine: (custom-flannel-798303)     </disk>
	I0120 14:27:57.884979 1981052 main.go:141] libmachine: (custom-flannel-798303)     <interface type='network'>
	I0120 14:27:57.884994 1981052 main.go:141] libmachine: (custom-flannel-798303)       <source network='mk-custom-flannel-798303'/>
	I0120 14:27:57.885014 1981052 main.go:141] libmachine: (custom-flannel-798303)       <model type='virtio'/>
	I0120 14:27:57.885024 1981052 main.go:141] libmachine: (custom-flannel-798303)     </interface>
	I0120 14:27:57.885040 1981052 main.go:141] libmachine: (custom-flannel-798303)     <interface type='network'>
	I0120 14:27:57.885054 1981052 main.go:141] libmachine: (custom-flannel-798303)       <source network='default'/>
	I0120 14:27:57.885065 1981052 main.go:141] libmachine: (custom-flannel-798303)       <model type='virtio'/>
	I0120 14:27:57.885074 1981052 main.go:141] libmachine: (custom-flannel-798303)     </interface>
	I0120 14:27:57.885084 1981052 main.go:141] libmachine: (custom-flannel-798303)     <serial type='pty'>
	I0120 14:27:57.885116 1981052 main.go:141] libmachine: (custom-flannel-798303)       <target port='0'/>
	I0120 14:27:57.885154 1981052 main.go:141] libmachine: (custom-flannel-798303)     </serial>
	I0120 14:27:57.885174 1981052 main.go:141] libmachine: (custom-flannel-798303)     <console type='pty'>
	I0120 14:27:57.885192 1981052 main.go:141] libmachine: (custom-flannel-798303)       <target type='serial' port='0'/>
	I0120 14:27:57.885205 1981052 main.go:141] libmachine: (custom-flannel-798303)     </console>
	I0120 14:27:57.885216 1981052 main.go:141] libmachine: (custom-flannel-798303)     <rng model='virtio'>
	I0120 14:27:57.885225 1981052 main.go:141] libmachine: (custom-flannel-798303)       <backend model='random'>/dev/random</backend>
	I0120 14:27:57.885235 1981052 main.go:141] libmachine: (custom-flannel-798303)     </rng>
	I0120 14:27:57.885243 1981052 main.go:141] libmachine: (custom-flannel-798303)     
	I0120 14:27:57.885252 1981052 main.go:141] libmachine: (custom-flannel-798303)     
	I0120 14:27:57.885261 1981052 main.go:141] libmachine: (custom-flannel-798303)   </devices>
	I0120 14:27:57.885276 1981052 main.go:141] libmachine: (custom-flannel-798303) </domain>
	I0120 14:27:57.885290 1981052 main.go:141] libmachine: (custom-flannel-798303) 
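The domain definition above is a rendered libvirt XML document; conceptually it is a template filled in with the machine's name, resources, disk, and networks. A pared-down sketch with text/template; the field names, template body, and disk path are illustrative, not the KVM driver's real template:

	// domainxml_sketch.go - illustrative only.
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// A pared-down version of the domain XML printed above; the real template
	// lives in minikube's KVM driver and carries more devices.
	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='disk'>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	type domain struct {
		Name, DiskPath, Network string
		MemoryMiB, CPUs         int
	}

	func main() {
		t := template.Must(template.New("domain").Parse(domainTmpl))
		err := t.Execute(os.Stdout, domain{
			Name:      "custom-flannel-798303",
			DiskPath:  "/var/lib/minikube/custom-flannel-798303.rawdisk", // placeholder path
			Network:   "mk-custom-flannel-798303",
			MemoryMiB: 3072,
			CPUs:      2,
		})
		if err != nil {
			log.Fatal(err)
		}
	}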
	I0120 14:27:57.889454 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:e4:c2:e6 in network default
	I0120 14:27:57.890103 1981052 main.go:141] libmachine: (custom-flannel-798303) starting domain...
	I0120 14:27:57.890123 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:27:57.890132 1981052 main.go:141] libmachine: (custom-flannel-798303) ensuring networks are active...
	I0120 14:27:57.890829 1981052 main.go:141] libmachine: (custom-flannel-798303) Ensuring network default is active
	I0120 14:27:57.891185 1981052 main.go:141] libmachine: (custom-flannel-798303) Ensuring network mk-custom-flannel-798303 is active
	I0120 14:27:57.891785 1981052 main.go:141] libmachine: (custom-flannel-798303) getting domain XML...
	I0120 14:27:57.892704 1981052 main.go:141] libmachine: (custom-flannel-798303) creating domain...
	I0120 14:27:59.282227 1981052 main.go:141] libmachine: (custom-flannel-798303) waiting for IP...
	I0120 14:27:59.283263 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:27:59.283891 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:27:59.283920 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:27:59.283875 1981169 retry.go:31] will retry after 281.540563ms: waiting for domain to come up
	I0120 14:27:59.567588 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:27:59.568351 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:27:59.568444 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:27:59.568348 1981169 retry.go:31] will retry after 269.360934ms: waiting for domain to come up
	I0120 14:27:59.839747 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:27:59.840443 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:27:59.840466 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:27:59.840423 1981169 retry.go:31] will retry after 384.037883ms: waiting for domain to come up
	I0120 14:28:00.225756 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:00.226504 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:00.226531 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:00.226457 1981169 retry.go:31] will retry after 439.430142ms: waiting for domain to come up
	I0120 14:28:00.667366 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:00.668043 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:00.668087 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:00.668032 1981169 retry.go:31] will retry after 580.281019ms: waiting for domain to come up
	I0120 14:28:01.249478 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:01.250055 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:01.250095 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:01.250024 1981169 retry.go:31] will retry after 742.725042ms: waiting for domain to come up
	I0120 14:28:01.994664 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:01.995220 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:01.995249 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:01.995190 1981169 retry.go:31] will retry after 1.180543176s: waiting for domain to come up
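The "waiting for IP" lines above are a poll loop: query the DHCP leases, and if no address is found yet, retry after a randomized, growing delay until a deadline. The shape of that loop, sketched with a stand-in lookup function:

	// waitforip_sketch.go - illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP has the shape of the retry loop above: poll a lookup function and
	// sleep a randomized, growing interval between attempts until a deadline.
	// lookup stands in for the real DHCP-lease query against libvirt.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 250 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("attempt %d: no IP yet, retrying after %v\n", attempt, sleep)
			time.Sleep(sleep)
			backoff += backoff / 2 // grow roughly like the intervals in the log
		}
		return "", errors.New("timed out waiting for an IP address")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("lease not found")
			}
			return "192.168.39.123", nil // made-up address for the demo
		}, 30*time.Second)
		fmt.Println(ip, err)
	}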
	I0120 14:28:00.632189 1979335 crio.go:462] duration metric: took 1.64445395s to copy over tarball
	I0120 14:28:00.632310 1979335 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 14:28:03.242675 1979335 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.610322417s)
	I0120 14:28:03.242714 1979335 crio.go:469] duration metric: took 2.610484287s to extract the tarball
	I0120 14:28:03.242724 1979335 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 14:28:03.283231 1979335 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:28:03.331082 1979335 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 14:28:03.331114 1979335 cache_images.go:84] Images are preloaded, skipping loading
	I0120 14:28:03.331123 1979335 kubeadm.go:934] updating node { 192.168.50.150 8443 v1.32.0 crio true true} ...
	I0120 14:28:03.331257 1979335 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-798303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:calico-798303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0120 14:28:03.331348 1979335 ssh_runner.go:195] Run: crio config
	I0120 14:28:03.382876 1979335 cni.go:84] Creating CNI manager for "calico"
	I0120 14:28:03.382918 1979335 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:28:03.382981 1979335 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.150 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-798303 NodeName:calico-798303 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 14:28:03.383188 1979335 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-798303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.150"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.150"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:28:03.383304 1979335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 14:28:03.394455 1979335 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:28:03.394551 1979335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:28:03.404868 1979335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0120 14:28:03.422941 1979335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:28:03.444034 1979335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0120 14:28:03.463052 1979335 ssh_runner.go:195] Run: grep 192.168.50.150	control-plane.minikube.internal$ /etc/hosts
	I0120 14:28:03.467320 1979335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:28:03.481529 1979335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:28:03.626519 1979335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:28:03.646206 1979335 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303 for IP: 192.168.50.150
	I0120 14:28:03.646252 1979335 certs.go:194] generating shared ca certs ...
	I0120 14:28:03.646276 1979335 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:03.646487 1979335 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:28:03.646543 1979335 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:28:03.646557 1979335 certs.go:256] generating profile certs ...
	I0120 14:28:03.646675 1979335 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/client.key
	I0120 14:28:03.646697 1979335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/client.crt with IP's: []
	I0120 14:28:03.777538 1979335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/client.crt ...
	I0120 14:28:03.777576 1979335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/client.crt: {Name:mk057820a00318dd728f498fd3a611505dbf7879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:03.777769 1979335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/client.key ...
	I0120 14:28:03.777780 1979335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/client.key: {Name:mk4d471abdbe48c999134ab5d72d13a9f7734be3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:03.777857 1979335 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.key.d3cba062
	I0120 14:28:03.777873 1979335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.crt.d3cba062 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.150]
	I0120 14:28:03.934445 1979335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.crt.d3cba062 ...
	I0120 14:28:03.934482 1979335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.crt.d3cba062: {Name:mkdf34495c04f0a580c8a04d0ce65d3134564a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:03.934676 1979335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.key.d3cba062 ...
	I0120 14:28:03.934694 1979335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.key.d3cba062: {Name:mk0d0a140dbca60d86a4061f9793f7737326a6ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:03.934779 1979335 certs.go:381] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.crt.d3cba062 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.crt
	I0120 14:28:03.934851 1979335 certs.go:385] copying /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.key.d3cba062 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.key
	I0120 14:28:03.934903 1979335 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/proxy-client.key
	I0120 14:28:03.934920 1979335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/proxy-client.crt with IP's: []
	I0120 14:28:04.176683 1979335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/proxy-client.crt ...
	I0120 14:28:04.176720 1979335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/proxy-client.crt: {Name:mkef74ac22a247842bb1a09b53ed0fdcfb7dac3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:04.176887 1979335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/proxy-client.key ...
	I0120 14:28:04.176901 1979335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/proxy-client.key: {Name:mk22ab09aa22e61609beb9df0467702939c1bc98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:04.177073 1979335 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:28:04.177110 1979335 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:28:04.177124 1979335 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:28:04.177163 1979335 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:28:04.177202 1979335 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:28:04.177223 1979335 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:28:04.177263 1979335 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:28:04.177881 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:28:04.210628 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:28:04.238038 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:28:04.265728 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:28:04.292707 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0120 14:28:04.324188 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 14:28:04.378591 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:28:04.411873 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/calico-798303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0120 14:28:04.442357 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:28:04.474495 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:28:04.504967 1979335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:28:04.531442 1979335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:28:04.554582 1979335 ssh_runner.go:195] Run: openssl version
	I0120 14:28:04.562008 1979335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:28:04.575028 1979335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:28:04.580301 1979335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:28:04.580389 1979335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:28:04.586908 1979335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:28:04.598993 1979335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:28:04.610985 1979335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:28:04.615845 1979335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:28:04.615916 1979335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:28:04.622026 1979335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 14:28:04.635088 1979335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:28:04.647105 1979335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:28:04.652076 1979335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:28:04.652138 1979335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:28:04.658001 1979335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
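
The three certificate blocks above all follow the same pattern: hash the PEM with openssl to get the name OpenSSL expects under /etc/ssl/certs, then symlink <hash>.0 back to the certificate. A minimal Go sketch of that pattern, assuming local execution (minikube runs the same commands inside the VM through ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert reproduces the openssl-hash + symlink step seen in the log:
    // compute the OpenSSL subject hash of a PEM file, then create the
    // <certsDir>/<hash>.0 symlink pointing at it.
    func linkCert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// ln -fs: drop any stale link first, then create a fresh one.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	// Illustrative paths; in the log the same commands run inside the guest VM.
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
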
	I0120 14:28:04.669100 1979335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:28:04.673690 1979335 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0120 14:28:04.673761 1979335 kubeadm.go:392] StartCluster: {Name:calico-798303 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:calico-798303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.50.150 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:28:04.673855 1979335 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:28:04.673922 1979335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:28:04.721893 1979335 cri.go:89] found id: ""
	I0120 14:28:04.721968 1979335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:28:04.732766 1979335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:28:04.744490 1979335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:28:04.758822 1979335 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:28:04.758849 1979335 kubeadm.go:157] found existing configuration files:
	
	I0120 14:28:04.758903 1979335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:28:04.769254 1979335 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:28:04.769332 1979335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:28:04.779909 1979335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:28:04.790475 1979335 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:28:04.790561 1979335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:28:04.800862 1979335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:28:04.810776 1979335 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:28:04.810867 1979335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:28:04.823321 1979335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:28:04.834116 1979335 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:28:04.834199 1979335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
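
The grep/rm pairs above implement a single rule: any kubeconfig under /etc/kubernetes that does not already reference https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it. A rough local equivalent of that check in Go (a sketch, not minikube's actual code, which runs each step over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cleanStaleKubeconfigs removes kubeconfig files that do not reference the
    // expected control-plane endpoint, mirroring the grep/rm sequence in the log.
    func cleanStaleKubeconfigs(dir, endpoint string) {
    	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := filepath.Join(dir, name)
    		data, err := os.ReadFile(path)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or stale endpoint: delete so kubeadm writes a fresh one.
    			_ = os.Remove(path)
    			fmt.Printf("removed stale config %s\n", path)
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
    }
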
	I0120 14:28:04.844976 1979335 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:28:04.916866 1979335 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:28:04.917008 1979335 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:28:05.054498 1979335 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:28:05.054690 1979335 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:28:05.054833 1979335 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:28:05.064117 1979335 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:28:02.206066 1979051 node_ready.go:53] node "kindnet-798303" has status "Ready":"False"
	I0120 14:28:04.705197 1979051 node_ready.go:53] node "kindnet-798303" has status "Ready":"False"
	I0120 14:28:03.177274 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:03.177810 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:03.177844 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:03.177788 1981169 retry.go:31] will retry after 1.198513604s: waiting for domain to come up
	I0120 14:28:04.377773 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:04.378342 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:04.378375 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:04.378287 1981169 retry.go:31] will retry after 1.313662215s: waiting for domain to come up
	I0120 14:28:05.694088 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:05.694718 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:05.694769 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:05.694680 1981169 retry.go:31] will retry after 1.877186406s: waiting for domain to come up
	I0120 14:28:05.222562 1979335 out.go:235]   - Generating certificates and keys ...
	I0120 14:28:05.222734 1979335 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:28:05.222835 1979335 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:28:05.301553 1979335 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0120 14:28:05.468688 1979335 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0120 14:28:05.652926 1979335 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0120 14:28:05.781259 1979335 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0120 14:28:05.954591 1979335 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0120 14:28:05.954814 1979335 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-798303 localhost] and IPs [192.168.50.150 127.0.0.1 ::1]
	I0120 14:28:06.240015 1979335 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0120 14:28:06.240299 1979335 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-798303 localhost] and IPs [192.168.50.150 127.0.0.1 ::1]
	I0120 14:28:06.430662 1979335 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0120 14:28:06.681047 1979335 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0120 14:28:06.912394 1979335 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0120 14:28:06.912900 1979335 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:28:07.332178 1979335 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:28:07.471272 1979335 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:28:07.913946 1979335 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:28:08.251544 1979335 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:28:08.404300 1979335 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:28:08.405336 1979335 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:28:08.409023 1979335 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:28:08.410860 1979335 out.go:235]   - Booting up control plane ...
	I0120 14:28:08.411001 1979335 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:28:08.411149 1979335 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:28:08.412079 1979335 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:28:08.435051 1979335 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:28:08.446133 1979335 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:28:08.446232 1979335 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:28:08.607686 1979335 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:28:08.607837 1979335 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:28:09.609578 1979335 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002620876s
	I0120 14:28:09.609795 1979335 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
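
The kubelet-check and api-check above are plain HTTP polls against a /healthz endpoint until it returns 200 or the deadline passes. A minimal sketch of that loop (the URL and intervals are illustrative, not kubeadm's exact defaults):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it answers 200 OK or the deadline passes,
    // the same shape of loop kubeadm uses for its kubelet and API server checks.
    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	client := &http.Client{Timeout: 2 * time.Second}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
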
	I0120 14:28:06.171014 1979051 node_ready.go:49] node "kindnet-798303" has status "Ready":"True"
	I0120 14:28:06.171050 1979051 node_ready.go:38] duration metric: took 12.47040283s for node "kindnet-798303" to be "Ready" ...
	I0120 14:28:06.171067 1979051 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:28:06.181691 1979051 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-tn2zz" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.189874 1979051 pod_ready.go:103] pod "coredns-668d6bf9bc-tn2zz" in "kube-system" namespace has status "Ready":"False"
	I0120 14:28:08.694500 1979051 pod_ready.go:93] pod "coredns-668d6bf9bc-tn2zz" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:08.694534 1979051 pod_ready.go:82] duration metric: took 2.512807953s for pod "coredns-668d6bf9bc-tn2zz" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.694549 1979051 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-798303" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.704917 1979051 pod_ready.go:93] pod "etcd-kindnet-798303" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:08.704956 1979051 pod_ready.go:82] duration metric: took 10.398136ms for pod "etcd-kindnet-798303" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.704982 1979051 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-798303" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.712906 1979051 pod_ready.go:93] pod "kube-apiserver-kindnet-798303" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:08.712944 1979051 pod_ready.go:82] duration metric: took 7.945418ms for pod "kube-apiserver-kindnet-798303" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.712958 1979051 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-798303" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.717638 1979051 pod_ready.go:93] pod "kube-controller-manager-kindnet-798303" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:08.717669 1979051 pod_ready.go:82] duration metric: took 4.702315ms for pod "kube-controller-manager-kindnet-798303" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.717682 1979051 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-8z429" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.722440 1979051 pod_ready.go:93] pod "kube-proxy-8z429" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:08.722470 1979051 pod_ready.go:82] duration metric: took 4.778738ms for pod "kube-proxy-8z429" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:08.722483 1979051 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-798303" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:09.086914 1979051 pod_ready.go:93] pod "kube-scheduler-kindnet-798303" in "kube-system" namespace has status "Ready":"True"
	I0120 14:28:09.086946 1979051 pod_ready.go:82] duration metric: took 364.453493ms for pod "kube-scheduler-kindnet-798303" in "kube-system" namespace to be "Ready" ...
	I0120 14:28:09.086960 1979051 pod_ready.go:39] duration metric: took 2.915847546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
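
The pod_ready lines above boil down to reading each pod's PodReady condition until it reports True. A client-go sketch of that check, assuming a standard kubeconfig and the pod name taken from the log (illustrative only, not minikube's pod_ready implementation):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // the check behind the Ready:"True"/"False" lines in the log.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-kindnet-798303", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ready:", isPodReady(pod))
    }
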
	I0120 14:28:09.086982 1979051 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:28:09.087047 1979051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:28:09.104907 1979051 api_server.go:72] duration metric: took 15.777931093s to wait for apiserver process to appear ...
	I0120 14:28:09.104944 1979051 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:28:09.104971 1979051 api_server.go:253] Checking apiserver healthz at https://192.168.61.127:8443/healthz ...
	I0120 14:28:09.112646 1979051 api_server.go:279] https://192.168.61.127:8443/healthz returned 200:
	ok
	I0120 14:28:09.113890 1979051 api_server.go:141] control plane version: v1.32.0
	I0120 14:28:09.113921 1979051 api_server.go:131] duration metric: took 8.968223ms to wait for apiserver health ...
	I0120 14:28:09.113933 1979051 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:28:09.291519 1979051 system_pods.go:59] 8 kube-system pods found
	I0120 14:28:09.291570 1979051 system_pods.go:61] "coredns-668d6bf9bc-tn2zz" [043b1d8d-d883-4bf7-8cd8-8989c28c165c] Running
	I0120 14:28:09.291578 1979051 system_pods.go:61] "etcd-kindnet-798303" [58e2794a-933f-4960-94aa-65c9f32e515a] Running
	I0120 14:28:09.291583 1979051 system_pods.go:61] "kindnet-xb9ph" [cb45af23-b7cd-40db-89cf-c5e0ee99c675] Running
	I0120 14:28:09.291588 1979051 system_pods.go:61] "kube-apiserver-kindnet-798303" [932dcd3f-6e1b-4ab4-847c-2eb1de115cc1] Running
	I0120 14:28:09.291593 1979051 system_pods.go:61] "kube-controller-manager-kindnet-798303" [bb7a04ab-56ff-4f71-8ab3-577d5f8c13c6] Running
	I0120 14:28:09.291597 1979051 system_pods.go:61] "kube-proxy-8z429" [f2ae0e16-bd03-41f5-b388-799d3aa89e70] Running
	I0120 14:28:09.291602 1979051 system_pods.go:61] "kube-scheduler-kindnet-798303" [8e85ee9b-ed88-46e5-8ff1-5265fab4fc67] Running
	I0120 14:28:09.291607 1979051 system_pods.go:61] "storage-provisioner" [10daa708-e3d2-42be-b9ff-013e55c90313] Running
	I0120 14:28:09.291617 1979051 system_pods.go:74] duration metric: took 177.675466ms to wait for pod list to return data ...
	I0120 14:28:09.291627 1979051 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:28:09.487311 1979051 default_sa.go:45] found service account: "default"
	I0120 14:28:09.487346 1979051 default_sa.go:55] duration metric: took 195.712348ms for default service account to be created ...
	I0120 14:28:09.487358 1979051 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:28:09.689881 1979051 system_pods.go:87] 8 kube-system pods found
	I0120 14:28:09.889172 1979051 system_pods.go:105] "coredns-668d6bf9bc-tn2zz" [043b1d8d-d883-4bf7-8cd8-8989c28c165c] Running
	I0120 14:28:09.889204 1979051 system_pods.go:105] "etcd-kindnet-798303" [58e2794a-933f-4960-94aa-65c9f32e515a] Running
	I0120 14:28:09.889212 1979051 system_pods.go:105] "kindnet-xb9ph" [cb45af23-b7cd-40db-89cf-c5e0ee99c675] Running
	I0120 14:28:09.889219 1979051 system_pods.go:105] "kube-apiserver-kindnet-798303" [932dcd3f-6e1b-4ab4-847c-2eb1de115cc1] Running
	I0120 14:28:09.889226 1979051 system_pods.go:105] "kube-controller-manager-kindnet-798303" [bb7a04ab-56ff-4f71-8ab3-577d5f8c13c6] Running
	I0120 14:28:09.889233 1979051 system_pods.go:105] "kube-proxy-8z429" [f2ae0e16-bd03-41f5-b388-799d3aa89e70] Running
	I0120 14:28:09.889240 1979051 system_pods.go:105] "kube-scheduler-kindnet-798303" [8e85ee9b-ed88-46e5-8ff1-5265fab4fc67] Running
	I0120 14:28:09.889247 1979051 system_pods.go:105] "storage-provisioner" [10daa708-e3d2-42be-b9ff-013e55c90313] Running
	I0120 14:28:09.889259 1979051 system_pods.go:147] duration metric: took 401.892448ms to wait for k8s-apps to be running ...
	I0120 14:28:09.889277 1979051 system_svc.go:44] waiting for kubelet service to be running ....
	I0120 14:28:09.889352 1979051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:28:09.911419 1979051 system_svc.go:56] duration metric: took 22.137705ms WaitForService to wait for kubelet
	I0120 14:28:09.911465 1979051 kubeadm.go:582] duration metric: took 16.584496919s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:28:09.911491 1979051 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:28:10.088036 1979051 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:28:10.088088 1979051 node_conditions.go:123] node cpu capacity is 2
	I0120 14:28:10.088107 1979051 node_conditions.go:105] duration metric: took 176.609163ms to run NodePressure ...
	I0120 14:28:10.088124 1979051 start.go:241] waiting for startup goroutines ...
	I0120 14:28:10.088133 1979051 start.go:246] waiting for cluster config update ...
	I0120 14:28:10.088189 1979051 start.go:255] writing updated cluster config ...
	I0120 14:28:10.088591 1979051 ssh_runner.go:195] Run: rm -f paused
	I0120 14:28:10.165007 1979051 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
	I0120 14:28:10.167807 1979051 out.go:177] * Done! kubectl is now configured to use "kindnet-798303" cluster and "default" namespace by default
	I0120 14:28:07.573654 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:07.574206 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:07.574234 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:07.574193 1981169 retry.go:31] will retry after 2.656419388s: waiting for domain to come up
	I0120 14:28:10.232004 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:10.232622 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:10.232651 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:10.232601 1981169 retry.go:31] will retry after 2.558753467s: waiting for domain to come up
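
The libmachine lines interleaved above show the driver repeatedly failing to find the guest's DHCP lease and retrying after a growing, jittered delay. A generic version of that retry loop (the attempt count and backoff values are illustrative, not minikube's):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry calls fn until it succeeds or attempts are exhausted, sleeping a
    // growing, slightly jittered interval between tries, the pattern behind the
    // "will retry after ..." lines in the log.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	_ = retry(5, time.Second, func() error {
    		return errors.New("waiting for domain to come up")
    	})
    }
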
	I0120 14:28:14.609094 1979335 kubeadm.go:310] [api-check] The API server is healthy after 5.00201385s
	I0120 14:28:14.627299 1979335 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:28:14.654205 1979335 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:28:14.685384 1979335 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:28:14.685604 1979335 kubeadm.go:310] [mark-control-plane] Marking the node calico-798303 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:28:14.698012 1979335 kubeadm.go:310] [bootstrap-token] Using token: fnqxwk.od4prtcefo7trcim
	I0120 14:28:14.700492 1979335 out.go:235]   - Configuring RBAC rules ...
	I0120 14:28:14.700678 1979335 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:28:14.706000 1979335 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:28:14.720326 1979335 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:28:14.725645 1979335 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:28:14.731073 1979335 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:28:14.739080 1979335 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:28:15.017543 1979335 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:28:15.480407 1979335 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:28:16.016436 1979335 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:28:16.017318 1979335 kubeadm.go:310] 
	I0120 14:28:16.017388 1979335 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:28:16.017417 1979335 kubeadm.go:310] 
	I0120 14:28:16.017607 1979335 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:28:16.017631 1979335 kubeadm.go:310] 
	I0120 14:28:16.017665 1979335 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:28:16.017769 1979335 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:28:16.017835 1979335 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:28:16.017840 1979335 kubeadm.go:310] 
	I0120 14:28:16.017889 1979335 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:28:16.017893 1979335 kubeadm.go:310] 
	I0120 14:28:16.017946 1979335 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:28:16.017953 1979335 kubeadm.go:310] 
	I0120 14:28:16.018027 1979335 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:28:16.018157 1979335 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:28:16.018271 1979335 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:28:16.018286 1979335 kubeadm.go:310] 
	I0120 14:28:16.018403 1979335 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:28:16.018502 1979335 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:28:16.018513 1979335 kubeadm.go:310] 
	I0120 14:28:16.018692 1979335 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fnqxwk.od4prtcefo7trcim \
	I0120 14:28:16.018852 1979335 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:28:16.018897 1979335 kubeadm.go:310] 	--control-plane 
	I0120 14:28:16.018907 1979335 kubeadm.go:310] 
	I0120 14:28:16.019046 1979335 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:28:16.019068 1979335 kubeadm.go:310] 
	I0120 14:28:16.019196 1979335 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fnqxwk.od4prtcefo7trcim \
	I0120 14:28:16.019348 1979335 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:28:16.020038 1979335 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:28:16.020065 1979335 cni.go:84] Creating CNI manager for "calico"
	I0120 14:28:16.021921 1979335 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0120 14:28:12.793099 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:12.793624 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:12.793684 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:12.793616 1981169 retry.go:31] will retry after 3.312493549s: waiting for domain to come up
	I0120 14:28:16.107937 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:16.108458 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find current IP address of domain custom-flannel-798303 in network mk-custom-flannel-798303
	I0120 14:28:16.108554 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | I0120 14:28:16.108464 1981169 retry.go:31] will retry after 3.752881022s: waiting for domain to come up
	I0120 14:28:16.024128 1979335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0120 14:28:16.024154 1979335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (323422 bytes)
	I0120 14:28:16.056157 1979335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0120 14:28:17.827927 1979335 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.771712107s)
	I0120 14:28:17.828006 1979335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:28:17.828095 1979335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:28:17.828152 1979335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-798303 minikube.k8s.io/updated_at=2025_01_20T14_28_17_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=calico-798303 minikube.k8s.io/primary=true
	I0120 14:28:17.956508 1979335 ops.go:34] apiserver oom_adj: -16
	I0120 14:28:17.957117 1979335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:28:18.457557 1979335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:28:18.958086 1979335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:28:19.457338 1979335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:28:19.957651 1979335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:28:20.458000 1979335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:28:20.719545 1979335 kubeadm.go:1113] duration metric: took 2.891520404s to wait for elevateKubeSystemPrivileges
	I0120 14:28:20.719597 1979335 kubeadm.go:394] duration metric: took 16.045842489s to StartCluster
	I0120 14:28:20.719623 1979335 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:20.719725 1979335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:28:20.721688 1979335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:28:20.722029 1979335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0120 14:28:20.722055 1979335 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:28:20.722017 1979335 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.150 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:28:20.722154 1979335 addons.go:69] Setting default-storageclass=true in profile "calico-798303"
	I0120 14:28:20.722173 1979335 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-798303"
	I0120 14:28:20.722143 1979335 addons.go:69] Setting storage-provisioner=true in profile "calico-798303"
	I0120 14:28:20.722320 1979335 config.go:182] Loaded profile config "calico-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:28:20.722337 1979335 addons.go:238] Setting addon storage-provisioner=true in "calico-798303"
	I0120 14:28:20.722376 1979335 host.go:66] Checking if "calico-798303" exists ...
	I0120 14:28:20.722712 1979335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:28:20.722758 1979335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:28:20.722857 1979335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:28:20.722879 1979335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:28:20.723936 1979335 out.go:177] * Verifying Kubernetes components...
	I0120 14:28:20.725376 1979335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:28:20.745158 1979335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I0120 14:28:20.745479 1979335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I0120 14:28:20.746069 1979335 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:28:20.746269 1979335 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:28:20.746864 1979335 main.go:141] libmachine: Using API Version  1
	I0120 14:28:20.746888 1979335 main.go:141] libmachine: Using API Version  1
	I0120 14:28:20.746892 1979335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:28:20.746902 1979335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:28:20.747360 1979335 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:28:20.747675 1979335 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:28:20.747898 1979335 main.go:141] libmachine: (calico-798303) Calling .GetState
	I0120 14:28:20.748020 1979335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:28:20.748060 1979335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:28:20.752480 1979335 addons.go:238] Setting addon default-storageclass=true in "calico-798303"
	I0120 14:28:20.752538 1979335 host.go:66] Checking if "calico-798303" exists ...
	I0120 14:28:20.752951 1979335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:28:20.753008 1979335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:28:20.769539 1979335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33439
	I0120 14:28:20.770109 1979335 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:28:20.770675 1979335 main.go:141] libmachine: Using API Version  1
	I0120 14:28:20.770703 1979335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:28:20.771098 1979335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0120 14:28:20.771117 1979335 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:28:20.771323 1979335 main.go:141] libmachine: (calico-798303) Calling .GetState
	I0120 14:28:20.771526 1979335 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:28:20.772033 1979335 main.go:141] libmachine: Using API Version  1
	I0120 14:28:20.772051 1979335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:28:20.772505 1979335 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:28:20.773201 1979335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:28:20.773244 1979335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:28:20.773490 1979335 main.go:141] libmachine: (calico-798303) Calling .DriverName
	I0120 14:28:20.775217 1979335 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:28:20.776730 1979335 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:28:20.776753 1979335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:28:20.776775 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:28:20.780512 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:28:20.781024 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:28:20.781049 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:28:20.781350 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:28:20.781530 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:28:20.781672 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:28:20.781833 1979335 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/calico-798303/id_rsa Username:docker}
	I0120 14:28:20.790460 1979335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0120 14:28:20.790999 1979335 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:28:20.791666 1979335 main.go:141] libmachine: Using API Version  1
	I0120 14:28:20.791687 1979335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:28:20.791976 1979335 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:28:20.792203 1979335 main.go:141] libmachine: (calico-798303) Calling .GetState
	I0120 14:28:20.794127 1979335 main.go:141] libmachine: (calico-798303) Calling .DriverName
	I0120 14:28:20.794422 1979335 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:28:20.794449 1979335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:28:20.794471 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHHostname
	I0120 14:28:20.797327 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:28:20.797686 1979335 main.go:141] libmachine: (calico-798303) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ca:f7", ip: ""} in network mk-calico-798303: {Iface:virbr2 ExpiryTime:2025-01-20 15:27:47 +0000 UTC Type:0 Mac:52:54:00:09:ca:f7 Iaid: IPaddr:192.168.50.150 Prefix:24 Hostname:calico-798303 Clientid:01:52:54:00:09:ca:f7}
	I0120 14:28:20.797707 1979335 main.go:141] libmachine: (calico-798303) DBG | domain calico-798303 has defined IP address 192.168.50.150 and MAC address 52:54:00:09:ca:f7 in network mk-calico-798303
	I0120 14:28:20.797906 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHPort
	I0120 14:28:20.798113 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHKeyPath
	I0120 14:28:20.798226 1979335 main.go:141] libmachine: (calico-798303) Calling .GetSSHUsername
	I0120 14:28:20.798402 1979335 sshutil.go:53] new ssh client: &{IP:192.168.50.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/calico-798303/id_rsa Username:docker}
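
The new-ssh-client lines above are the transport used by the earlier "scp memory --> /etc/kubernetes/addons/..." transfers. A rough sketch of that file-push pattern with golang.org/x/crypto/ssh (host, key path, and the tee command are assumptions for illustration; minikube's ssh_runner has its own implementation):

    package main

    import (
    	"bytes"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // copyFileOverSSH streams a byte slice to a remote path through an SSH
    // session, roughly what the "scp memory --> <path>" log lines describe.
    func copyFileOverSSH(addr, keyPath, remotePath string, payload []byte) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(payload)
    	return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }

    func main() {
    	// Hypothetical values; the log uses the calico-798303 VM's IP and key.
    	_ = copyFileOverSSH("192.168.50.150:22", "/path/to/id_rsa", "/etc/kubernetes/addons/storage-provisioner.yaml", []byte("..."))
    }
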
	I0120 14:28:21.079053 1979335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:28:21.102353 1979335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:28:21.154463 1979335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:28:21.154488 1979335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
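
The long sed pipeline above rewrites the coredns ConfigMap so its Corefile gains a hosts block mapping host.minikube.internal to the host IP, which is what the later "host record injected" line confirms. A client-go sketch of the same idea (the string surgery is simplified and assumes the default kubeadm Corefile layout; minikube itself does this with sed and kubectl replace as shown):

    package main

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // injectHostRecord inserts a hosts{} block for host.minikube.internal ahead
    // of the "forward . /etc/resolv.conf" plugin in the coredns Corefile.
    func injectHostRecord(cs *kubernetes.Clientset, hostIP string) error {
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	hosts := "    hosts {\n       " + hostIP + " host.minikube.internal\n       fallthrough\n    }\n"
    	// Assumes the default Corefile indentation; adjust the match if it differs.
    	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "    forward .", hosts+"    forward .", 1)
    	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.Background(), cm, metav1.UpdateOptions{})
    	return err
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := injectHostRecord(cs, "192.168.50.1"); err != nil {
    		panic(err)
    	}
    }
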
	I0120 14:28:21.476917 1979335 main.go:141] libmachine: Making call to close driver server
	I0120 14:28:21.476946 1979335 main.go:141] libmachine: (calico-798303) Calling .Close
	I0120 14:28:21.477236 1979335 main.go:141] libmachine: (calico-798303) DBG | Closing plugin on server side
	I0120 14:28:21.477247 1979335 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:28:21.477258 1979335 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:28:21.477267 1979335 main.go:141] libmachine: Making call to close driver server
	I0120 14:28:21.477292 1979335 main.go:141] libmachine: (calico-798303) Calling .Close
	I0120 14:28:21.477579 1979335 main.go:141] libmachine: (calico-798303) DBG | Closing plugin on server side
	I0120 14:28:21.477608 1979335 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:28:21.477632 1979335 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:28:21.493133 1979335 main.go:141] libmachine: Making call to close driver server
	I0120 14:28:21.493166 1979335 main.go:141] libmachine: (calico-798303) Calling .Close
	I0120 14:28:21.493462 1979335 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:28:21.493536 1979335 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:28:21.493477 1979335 main.go:141] libmachine: (calico-798303) DBG | Closing plugin on server side
	I0120 14:28:21.866012 1979335 main.go:141] libmachine: Making call to close driver server
	I0120 14:28:21.866044 1979335 main.go:141] libmachine: (calico-798303) Calling .Close
	I0120 14:28:21.866198 1979335 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0120 14:28:21.866355 1979335 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:28:21.866369 1979335 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:28:21.866377 1979335 main.go:141] libmachine: Making call to close driver server
	I0120 14:28:21.866385 1979335 main.go:141] libmachine: (calico-798303) Calling .Close
	I0120 14:28:21.866646 1979335 main.go:141] libmachine: (calico-798303) DBG | Closing plugin on server side
	I0120 14:28:21.866692 1979335 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:28:21.866705 1979335 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:28:21.867721 1979335 node_ready.go:35] waiting up to 15m0s for node "calico-798303" to be "Ready" ...
	I0120 14:28:21.868363 1979335 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0120 14:28:19.863919 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:19.864540 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has current primary IP address 192.168.39.160 and MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:19.864563 1981052 main.go:141] libmachine: (custom-flannel-798303) found domain IP: 192.168.39.160
	I0120 14:28:19.864575 1981052 main.go:141] libmachine: (custom-flannel-798303) reserving static IP address...
	I0120 14:28:19.864959 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find host DHCP lease matching {name: "custom-flannel-798303", mac: "52:54:00:28:ab:7b", ip: "192.168.39.160"} in network mk-custom-flannel-798303
	I0120 14:28:19.949460 1981052 main.go:141] libmachine: (custom-flannel-798303) reserved static IP address 192.168.39.160 for domain custom-flannel-798303
	I0120 14:28:19.949488 1981052 main.go:141] libmachine: (custom-flannel-798303) waiting for SSH...
	I0120 14:28:19.949508 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | Getting to WaitForSSH function...
	I0120 14:28:19.953722 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | domain custom-flannel-798303 has defined MAC address 52:54:00:28:ab:7b in network mk-custom-flannel-798303
	I0120 14:28:19.954302 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:28:ab:7b", ip: ""} in network mk-custom-flannel-798303
	I0120 14:28:19.954326 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | unable to find defined IP address of network mk-custom-flannel-798303 interface with MAC address 52:54:00:28:ab:7b
	I0120 14:28:19.954494 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | Using SSH client type: external
	I0120 14:28:19.954526 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303/id_rsa (-rw-------)
	I0120 14:28:19.954682 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/custom-flannel-798303/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:28:19.954714 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | About to run SSH command:
	I0120 14:28:19.954728 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | exit 0
	I0120 14:28:19.959874 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | SSH cmd err, output: exit status 255: 
	I0120 14:28:19.959906 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0120 14:28:19.959918 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | command : exit 0
	I0120 14:28:19.959927 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | err     : exit status 255
	I0120 14:28:19.959938 1981052 main.go:141] libmachine: (custom-flannel-798303) DBG | output  : 
	I0120 14:28:21.869610 1979335 addons.go:514] duration metric: took 1.147557733s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0120 14:28:22.371857 1979335 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-798303" context rescaled to 1 replicas
	I0120 14:28:23.871594 1979335 node_ready.go:53] node "calico-798303" has status "Ready":"False"
	
	
	==> CRI-O <==
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.735598014Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f28039e-027a-4a2c-849d-2a9f14145ac5 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.738984222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=554a78a7-d27f-4d7d-badb-46aa3695aceb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.739997338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383306739948383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=554a78a7-d27f-4d7d-badb-46aa3695aceb name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.740896878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a13cb5f-6d55-4d0a-b261-d4abdb2fd6f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.741002263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a13cb5f-6d55-4d0a-b261-d4abdb2fd6f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.741470344Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:887a79c86ca65e22731762a9aecfe9f0ee9ea922a41c5ab4bd7795e09b3ec81b,PodSandboxId:ed1473b72b4c9e0eaa9aa3afed30540fa87802562ab896bdc58eae610b12fbc8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737383010181934041,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-dgr9v,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 729664d2-e1f8-4eda-8930-de4e9782cd41,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c027d62bee6ac9b99d4c493209eb64bf2ab1ba4009f6e2bfb82901c2fd86fa64,PodSandboxId:8dfc5a66eeb69141c38946fa4c579ae7f44d0a08c0122b8b671299f2b6af62aa,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737382049702019324,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-4tkxl,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: daa776e0-8646-4968-89f7-101d7d3863a1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a79d75f90fbb71e27a896913be9333e655ba33fa0d90ffa5148c3c1c5711e9c,PodSandboxId:34137510695f4893d2a9853ba5beb06030120fd622c4b9eabb577fa01c119e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737382042727879818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f716b6a-f5d2-49a0-a810-e0cdf72a3020,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3fe888aa35ac4e04be02727c67097ef7882a5b380e51712bde0227f37fa153b,PodSandboxId:708585d19020a05e72f32c1bc2caedb0c3b2263b31e7b3c5f947cf4c72e8c865,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737382041924742687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l4rmh,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06f4698d-c393-4f30-b8de-77ade02b575e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:916edc24bcfeaf634de3140757510abe421b1acef31b83836d05de0b42efe91d,PodSandboxId:c2a565d9b36d88ed10ef2ab532f0cf30e95de54092e1c21dfb3efee95fd06ab5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737382041978054382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-v22vm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95644362-4ab9-405f-b433-5b384ab083d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81ae35d51e23491985ba4cb10eb67d1f7da106a36024f925c9bef2c66707409,PodSandboxId:112af70d13fc371fe69bfa50cad7d27bbae88572f893ba20585ab204e1704cb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737382040754228376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vtjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57cfd3b-d6bd-4e61-a606-b2451a3768ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a254d436887a02f85fb4e7314b5fd146d10c529ebd7924c5abd45e37220b9503,PodSandboxId:461fd302338d2373b908819cebdca59cb7e1c30cf1ef1b7593a00120d00ac41c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff113
0c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737382029541274417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd7a5a7c7c13b070eb7176b299598d4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f23af5a2b9a7a6f3fe33d7554359e0c49b28ec7490b9109bf3e3a1a31999ef2,PodSandboxId:b18707ffaeeb83976a0083b6e4078db2f38436fabdb5c3016f557d53a780bbac,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737382029450480637,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58112060726a48539d2011747e5ac568,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c6da9982baf502a8343227fa5b40570c06f6451bfe6db7923def4f8978b46,PodSandboxId:cfb26ea3de8ec326b4c6c2e6af6153c0d12712cd72475aef1453affdc5d72f63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2
a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737382029496212210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c39ac23575a17150212f073f5372b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12ef8b8a7b3c7eeff8d96163889beed7bad7001c4d99c83dd19da322ed916535,PodSandboxId:e4f2839d864599bc4ef867da28b6066ace0780248def812236263a15616c638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd
4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737382029452641040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467aa97517d1f70890bfc5aa0e40e53d9f484be0e0ef5c2800a1434a1ee3bfec,PodSandboxId:3c97c053ba537ced1896da7574b094370ef44df169d48a1d3d46f438d8e09b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381740730162477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a13cb5f-6d55-4d0a-b261-d4abdb2fd6f0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.795081256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0fdc528-2b1d-4d5f-b51f-5b616df47310 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.795214424Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0fdc528-2b1d-4d5f-b51f-5b616df47310 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.796882580Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6e0b8eb-961f-4b4d-8eec-1f34a2316f2e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.797598262Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383306797559924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6e0b8eb-961f-4b4d-8eec-1f34a2316f2e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.798512406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dba87b56-1c70-4e79-ad61-f891beb8bcb3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.798617980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dba87b56-1c70-4e79-ad61-f891beb8bcb3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.799114708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:887a79c86ca65e22731762a9aecfe9f0ee9ea922a41c5ab4bd7795e09b3ec81b,PodSandboxId:ed1473b72b4c9e0eaa9aa3afed30540fa87802562ab896bdc58eae610b12fbc8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737383010181934041,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-dgr9v,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 729664d2-e1f8-4eda-8930-de4e9782cd41,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c027d62bee6ac9b99d4c493209eb64bf2ab1ba4009f6e2bfb82901c2fd86fa64,PodSandboxId:8dfc5a66eeb69141c38946fa4c579ae7f44d0a08c0122b8b671299f2b6af62aa,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737382049702019324,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-4tkxl,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: daa776e0-8646-4968-89f7-101d7d3863a1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a79d75f90fbb71e27a896913be9333e655ba33fa0d90ffa5148c3c1c5711e9c,PodSandboxId:34137510695f4893d2a9853ba5beb06030120fd622c4b9eabb577fa01c119e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737382042727879818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f716b6a-f5d2-49a0-a810-e0cdf72a3020,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3fe888aa35ac4e04be02727c67097ef7882a5b380e51712bde0227f37fa153b,PodSandboxId:708585d19020a05e72f32c1bc2caedb0c3b2263b31e7b3c5f947cf4c72e8c865,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737382041924742687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l4rmh,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06f4698d-c393-4f30-b8de-77ade02b575e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:916edc24bcfeaf634de3140757510abe421b1acef31b83836d05de0b42efe91d,PodSandboxId:c2a565d9b36d88ed10ef2ab532f0cf30e95de54092e1c21dfb3efee95fd06ab5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737382041978054382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-v22vm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95644362-4ab9-405f-b433-5b384ab083d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81ae35d51e23491985ba4cb10eb67d1f7da106a36024f925c9bef2c66707409,PodSandboxId:112af70d13fc371fe69bfa50cad7d27bbae88572f893ba20585ab204e1704cb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737382040754228376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vtjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57cfd3b-d6bd-4e61-a606-b2451a3768ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a254d436887a02f85fb4e7314b5fd146d10c529ebd7924c5abd45e37220b9503,PodSandboxId:461fd302338d2373b908819cebdca59cb7e1c30cf1ef1b7593a00120d00ac41c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff113
0c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737382029541274417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd7a5a7c7c13b070eb7176b299598d4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f23af5a2b9a7a6f3fe33d7554359e0c49b28ec7490b9109bf3e3a1a31999ef2,PodSandboxId:b18707ffaeeb83976a0083b6e4078db2f38436fabdb5c3016f557d53a780bbac,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737382029450480637,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58112060726a48539d2011747e5ac568,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c6da9982baf502a8343227fa5b40570c06f6451bfe6db7923def4f8978b46,PodSandboxId:cfb26ea3de8ec326b4c6c2e6af6153c0d12712cd72475aef1453affdc5d72f63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2
a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737382029496212210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c39ac23575a17150212f073f5372b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12ef8b8a7b3c7eeff8d96163889beed7bad7001c4d99c83dd19da322ed916535,PodSandboxId:e4f2839d864599bc4ef867da28b6066ace0780248def812236263a15616c638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd
4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737382029452641040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467aa97517d1f70890bfc5aa0e40e53d9f484be0e0ef5c2800a1434a1ee3bfec,PodSandboxId:3c97c053ba537ced1896da7574b094370ef44df169d48a1d3d46f438d8e09b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381740730162477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dba87b56-1c70-4e79-ad61-f891beb8bcb3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.878400973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cdcb441a-0d76-4ea7-9db4-0ba43fcb6b17 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.878494357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cdcb441a-0d76-4ea7-9db4-0ba43fcb6b17 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.880872491Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e3f6a270-1d31-444c-9c39-886d79461565 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.881224271Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ed1473b72b4c9e0eaa9aa3afed30540fa87802562ab896bdc58eae610b12fbc8,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-86c6bf9756-dgr9v,Uid:729664d2-e1f8-4eda-8930-de4e9782cd41,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382043820292724,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-dgr9v,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 729664d2-e1f8-4eda-8930-de4e9782cd41,k8s-app: dashboard-metrics-scraper,pod-template-hash: 86c6bf9756,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T14:07:23.213535099Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:8dfc5a66eeb69141c38946fa4c579ae7f44d0a08c0
122b8b671299f2b6af62aa,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-4tkxl,Uid:daa776e0-8646-4968-89f7-101d7d3863a1,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382043490278260,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-4tkxl,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: daa776e0-8646-4968-89f7-101d7d3863a1,k8s-app: kubernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T14:07:23.171185018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:597185899d50e0a974ec2d13c4df0b386fefa6b5dc1b998fad6f8f3f64fb279e,Metadata:&PodSandboxMetadata{Name:metrics-server-f79f97bbb-kp5hl,Uid:190513f9-3e9f-4705-ae23-9481987802f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382042195725265,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: metrics-server-f79f97bbb-kp5hl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 190513f9-3e9f-4705-ae23-9481987802f1,k8s-app: metrics-server,pod-template-hash: f79f97bbb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T14:07:21.870298241Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34137510695f4893d2a9853ba5beb06030120fd622c4b9eabb577fa01c119e90,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0f716b6a-f5d2-49a0-a810-e0cdf72a3020,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382041933754979,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f716b6a-f5d2-49a0-a810-e0cdf72a3020,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\
":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-01-20T14:07:21.595154925Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:708585d19020a05e72f32c1bc2caedb0c3b2263b31e7b3c5f947cf4c72e8c865,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-l4rmh,Uid:06f4698d-c393-4f30-b8de-77ade02b575e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382040490886466,Labels:map[string]string{io.kubernetes
.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-l4rmh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06f4698d-c393-4f30-b8de-77ade02b575e,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T14:07:20.178693835Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2a565d9b36d88ed10ef2ab532f0cf30e95de54092e1c21dfb3efee95fd06ab5,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-v22vm,Uid:95644362-4ab9-405f-b433-5b384ab083d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382040445114500,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-v22vm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95644362-4ab9-405f-b433-5b384ab083d1,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T14:07:20.129293114Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:112af70d13fc371fe69bfa50cad7d27bbae88572f893ba20585ab204e1704cb4,Metadata:&PodSandboxMetadata{Name:kube-proxy-6vtjs,Uid:d57cfd3b-d6bd-4e61-a606-b2451a3768ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382040372602295,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6vtjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57cfd3b-d6bd-4e61-a606-b2451a3768ca,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-01-20T14:07:20.058077994Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cfb26ea3de8ec326b4c6c2e6af6153c0d12712cd72475aef1453affdc5d72f63,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-727256,Uid:2c39ac23575a17150212f073f5372b4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382029271166883,Labels:map[string]string{component: kube-controlle
r-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c39ac23575a17150212f073f5372b4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2c39ac23575a17150212f073f5372b4e,kubernetes.io/config.seen: 2025-01-20T14:07:08.801503507Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4f2839d864599bc4ef867da28b6066ace0780248def812236263a15616c638f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-727256,Uid:6affec7bfaaed55ea1cdbeffcb002ef6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1737382029264710587,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,tier: control-plane,},Annotations:map[string]string{k
ubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.104:8444,kubernetes.io/config.hash: 6affec7bfaaed55ea1cdbeffcb002ef6,kubernetes.io/config.seen: 2025-01-20T14:07:08.801502128Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:461fd302338d2373b908819cebdca59cb7e1c30cf1ef1b7593a00120d00ac41c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-727256,Uid:7fd7a5a7c7c13b070eb7176b299598d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382029263706326,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd7a5a7c7c13b070eb7176b299598d4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7fd7a5a7c7c13b070eb7176b299598d4,kubernetes.io/config.seen: 2025-01-20T14:07:08.801504584Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:
b18707ffaeeb83976a0083b6e4078db2f38436fabdb5c3016f557d53a780bbac,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-727256,Uid:58112060726a48539d2011747e5ac568,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1737382029238627418,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58112060726a48539d2011747e5ac568,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.104:2379,kubernetes.io/config.hash: 58112060726a48539d2011747e5ac568,kubernetes.io/config.seen: 2025-01-20T14:07:08.801498033Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3c97c053ba537ced1896da7574b094370ef44df169d48a1d3d46f438d8e09b4f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-727256,Uid:6affec7bfaaed55ea1cdbeffcb002ef6,Namespace:kube-system,Attempt:0,},State:SAN
DBOX_NOTREADY,CreatedAt:1737381740429654594,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.104:8444,kubernetes.io/config.hash: 6affec7bfaaed55ea1cdbeffcb002ef6,kubernetes.io/config.seen: 2025-01-20T14:02:19.965529903Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e3f6a270-1d31-444c-9c39-886d79461565 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.884108491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87ae69ea-56a5-4181-a734-a8594ea5ef25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.884218519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87ae69ea-56a5-4181-a734-a8594ea5ef25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.884608960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:887a79c86ca65e22731762a9aecfe9f0ee9ea922a41c5ab4bd7795e09b3ec81b,PodSandboxId:ed1473b72b4c9e0eaa9aa3afed30540fa87802562ab896bdc58eae610b12fbc8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737383010181934041,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-dgr9v,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 729664d2-e1f8-4eda-8930-de4e9782cd41,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c027d62bee6ac9b99d4c493209eb64bf2ab1ba4009f6e2bfb82901c2fd86fa64,PodSandboxId:8dfc5a66eeb69141c38946fa4c579ae7f44d0a08c0122b8b671299f2b6af62aa,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737382049702019324,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-4tkxl,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: daa776e0-8646-4968-89f7-101d7d3863a1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a79d75f90fbb71e27a896913be9333e655ba33fa0d90ffa5148c3c1c5711e9c,PodSandboxId:34137510695f4893d2a9853ba5beb06030120fd622c4b9eabb577fa01c119e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737382042727879818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f716b6a-f5d2-49a0-a810-e0cdf72a3020,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3fe888aa35ac4e04be02727c67097ef7882a5b380e51712bde0227f37fa153b,PodSandboxId:708585d19020a05e72f32c1bc2caedb0c3b2263b31e7b3c5f947cf4c72e8c865,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737382041924742687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l4rmh,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06f4698d-c393-4f30-b8de-77ade02b575e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:916edc24bcfeaf634de3140757510abe421b1acef31b83836d05de0b42efe91d,PodSandboxId:c2a565d9b36d88ed10ef2ab532f0cf30e95de54092e1c21dfb3efee95fd06ab5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737382041978054382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-v22vm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95644362-4ab9-405f-b433-5b384ab083d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81ae35d51e23491985ba4cb10eb67d1f7da106a36024f925c9bef2c66707409,PodSandboxId:112af70d13fc371fe69bfa50cad7d27bbae88572f893ba20585ab204e1704cb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737382040754228376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vtjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57cfd3b-d6bd-4e61-a606-b2451a3768ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a254d436887a02f85fb4e7314b5fd146d10c529ebd7924c5abd45e37220b9503,PodSandboxId:461fd302338d2373b908819cebdca59cb7e1c30cf1ef1b7593a00120d00ac41c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff113
0c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737382029541274417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd7a5a7c7c13b070eb7176b299598d4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f23af5a2b9a7a6f3fe33d7554359e0c49b28ec7490b9109bf3e3a1a31999ef2,PodSandboxId:b18707ffaeeb83976a0083b6e4078db2f38436fabdb5c3016f557d53a780bbac,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737382029450480637,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58112060726a48539d2011747e5ac568,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c6da9982baf502a8343227fa5b40570c06f6451bfe6db7923def4f8978b46,PodSandboxId:cfb26ea3de8ec326b4c6c2e6af6153c0d12712cd72475aef1453affdc5d72f63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2
a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737382029496212210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c39ac23575a17150212f073f5372b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12ef8b8a7b3c7eeff8d96163889beed7bad7001c4d99c83dd19da322ed916535,PodSandboxId:e4f2839d864599bc4ef867da28b6066ace0780248def812236263a15616c638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd
4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737382029452641040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467aa97517d1f70890bfc5aa0e40e53d9f484be0e0ef5c2800a1434a1ee3bfec,PodSandboxId:3c97c053ba537ced1896da7574b094370ef44df169d48a1d3d46f438d8e09b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381740730162477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87ae69ea-56a5-4181-a734-a8594ea5ef25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.885041901Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e54443cc-2895-4bd7-8b72-48a7104450f6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.887773594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383306887658930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e54443cc-2895-4bd7-8b72-48a7104450f6 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.896259028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33e0c3ce-23ae-4619-a15c-4ea2011e7983 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.896459694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33e0c3ce-23ae-4619-a15c-4ea2011e7983 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:28:26 default-k8s-diff-port-727256 crio[727]: time="2025-01-20 14:28:26.896867013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:887a79c86ca65e22731762a9aecfe9f0ee9ea922a41c5ab4bd7795e09b3ec81b,PodSandboxId:ed1473b72b4c9e0eaa9aa3afed30540fa87802562ab896bdc58eae610b12fbc8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1737383010181934041,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-86c6bf9756-dgr9v,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 729664d2-e1f8-4eda-8930-de4e9782cd41,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c027d62bee6ac9b99d4c493209eb64bf2ab1ba4009f6e2bfb82901c2fd86fa64,PodSandboxId:8dfc5a66eeb69141c38946fa4c579ae7f44d0a08c0122b8b671299f2b6af62aa,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1737382049702019324,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-4tkxl,io.kubernetes.pod.namespace: kubernetes-dashboard
,io.kubernetes.pod.uid: daa776e0-8646-4968-89f7-101d7d3863a1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a79d75f90fbb71e27a896913be9333e655ba33fa0d90ffa5148c3c1c5711e9c,PodSandboxId:34137510695f4893d2a9853ba5beb06030120fd622c4b9eabb577fa01c119e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737382042727879818,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod
.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f716b6a-f5d2-49a0-a810-e0cdf72a3020,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3fe888aa35ac4e04be02727c67097ef7882a5b380e51712bde0227f37fa153b,PodSandboxId:708585d19020a05e72f32c1bc2caedb0c3b2263b31e7b3c5f947cf4c72e8c865,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737382041924742687,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-l4rmh,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06f4698d-c393-4f30-b8de-77ade02b575e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:916edc24bcfeaf634de3140757510abe421b1acef31b83836d05de0b42efe91d,PodSandboxId:c2a565d9b36d88ed10ef2ab532f0cf30e95de54092e1c21dfb3efee95fd06ab5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f
91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737382041978054382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-v22vm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95644362-4ab9-405f-b433-5b384ab083d1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c81ae35d51e23491985ba4cb10eb67d1f7da106a36024f925c9bef2c66707409,PodSandboxId:112af70d13fc371fe69bfa50cad7d27bbae88572f893ba20585ab204e1704cb4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737382040754228376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6vtjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d57cfd3b-d6bd-4e61-a606-b2451a3768ca,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a254d436887a02f85fb4e7314b5fd146d10c529ebd7924c5abd45e37220b9503,PodSandboxId:461fd302338d2373b908819cebdca59cb7e1c30cf1ef1b7593a00120d00ac41c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff113
0c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737382029541274417,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fd7a5a7c7c13b070eb7176b299598d4,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f23af5a2b9a7a6f3fe33d7554359e0c49b28ec7490b9109bf3e3a1a31999ef2,PodSandboxId:b18707ffaeeb83976a0083b6e4078db2f38436fabdb5c3016f557d53a780bbac,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737382029450480637,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58112060726a48539d2011747e5ac568,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f8c6da9982baf502a8343227fa5b40570c06f6451bfe6db7923def4f8978b46,PodSandboxId:cfb26ea3de8ec326b4c6c2e6af6153c0d12712cd72475aef1453affdc5d72f63,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2
a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737382029496212210,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c39ac23575a17150212f073f5372b4e,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12ef8b8a7b3c7eeff8d96163889beed7bad7001c4d99c83dd19da322ed916535,PodSandboxId:e4f2839d864599bc4ef867da28b6066ace0780248def812236263a15616c638f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd
4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737382029452641040,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467aa97517d1f70890bfc5aa0e40e53d9f484be0e0ef5c2800a1434a1ee3bfec,PodSandboxId:3c97c053ba537ced1896da7574b094370ef44df169d48a1d3d46f438d8e09b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4e
b408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1737381740730162477,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-727256,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6affec7bfaaed55ea1cdbeffcb002ef6,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33e0c3ce-23ae-4619-a15c-4ea2011e7983 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	887a79c86ca65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 minutes ago       Exited              dashboard-metrics-scraper   8                   ed1473b72b4c9       dashboard-metrics-scraper-86c6bf9756-dgr9v
	c027d62bee6ac       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   20 minutes ago      Running             kubernetes-dashboard        0                   8dfc5a66eeb69       kubernetes-dashboard-7779f9b69b-4tkxl
	8a79d75f90fbb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 minutes ago      Running             storage-provisioner         0                   34137510695f4       storage-provisioner
	916edc24bcfea       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   c2a565d9b36d8       coredns-668d6bf9bc-v22vm
	d3fe888aa35ac       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                           21 minutes ago      Running             coredns                     0                   708585d19020a       coredns-668d6bf9bc-l4rmh
	c81ae35d51e23       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                           21 minutes ago      Running             kube-proxy                  0                   112af70d13fc3       kube-proxy-6vtjs
	a254d436887a0       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                           21 minutes ago      Running             kube-scheduler              2                   461fd302338d2       kube-scheduler-default-k8s-diff-port-727256
	3f8c6da9982ba       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                           21 minutes ago      Running             kube-controller-manager     2                   cfb26ea3de8ec       kube-controller-manager-default-k8s-diff-port-727256
	12ef8b8a7b3c7       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           21 minutes ago      Running             kube-apiserver              2                   e4f2839d86459       kube-apiserver-default-k8s-diff-port-727256
	9f23af5a2b9a7       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                           21 minutes ago      Running             etcd                        2                   b18707ffaeeb8       etcd-default-k8s-diff-port-727256
	467aa97517d1f       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                           26 minutes ago      Exited              kube-apiserver              1                   3c97c053ba537       kube-apiserver-default-k8s-diff-port-727256
	
	
	==> coredns [916edc24bcfeaf634de3140757510abe421b1acef31b83836d05de0b42efe91d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d3fe888aa35ac4e04be02727c67097ef7882a5b380e51712bde0227f37fa153b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-727256
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-727256
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3
	                    minikube.k8s.io/name=default-k8s-diff-port-727256
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_20T14_07_15_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Jan 2025 14:07:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-727256
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Jan 2025 14:28:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:07:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:07:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:07:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Jan 2025 14:26:27 +0000   Mon, 20 Jan 2025 14:07:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.104
	  Hostname:    default-k8s-diff-port-727256
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0d2b64bb874450892b18f33a129e116
	  System UUID:                d0d2b64b-b874-4508-92b1-8f33a129e116
	  Boot ID:                    db51c652-2da7-45d7-8f02-18ff01f7fdad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-l4rmh                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-668d6bf9bc-v22vm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-727256                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-727256             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-727256    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-6vtjs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-727256             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-f79f97bbb-kp5hl                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-86c6bf9756-dgr9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-4tkxl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-727256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-727256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-727256 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-727256 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-727256 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-727256 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-727256 event: Registered Node default-k8s-diff-port-727256 in Controller
	
	
	==> dmesg <==
	[  +0.055796] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.385679] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jan20 14:02] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.708903] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.578204] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.066906] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065755] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.197692] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.148027] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +0.292269] systemd-fstab-generator[718]: Ignoring "noauto" option for root device
	[  +5.319347] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.089211] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.805874] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +4.650769] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.090974] kauditd_printk_skb: 85 callbacks suppressed
	[Jan20 14:07] systemd-fstab-generator[2704]: Ignoring "noauto" option for root device
	[  +0.063725] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.501474] systemd-fstab-generator[3048]: Ignoring "noauto" option for root device
	[  +0.095054] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.998593] systemd-fstab-generator[3176]: Ignoring "noauto" option for root device
	[  +0.123916] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.504456] kauditd_printk_skb: 110 callbacks suppressed
	[ +22.647980] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [9f23af5a2b9a7a6f3fe33d7554359e0c49b28ec7490b9109bf3e3a1a31999ef2] <==
	{"level":"info","ts":"2025-01-20T14:22:10.613834Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3684439417,"revision":1131,"compact-revision":880}
	{"level":"warn","ts":"2025-01-20T14:25:58.208904Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.161992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-01-20T14:25:58.209462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.221132ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:25:58.209547Z","caller":"traceutil/trace.go:171","msg":"trace[1769525548] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1583; }","duration":"104.30881ms","start":"2025-01-20T14:25:58.105211Z","end":"2025-01-20T14:25:58.209519Z","steps":["trace[1769525548] 'range keys from in-memory index tree'  (duration: 104.207797ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:25:58.209576Z","caller":"traceutil/trace.go:171","msg":"trace[328400424] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1583; }","duration":"183.889134ms","start":"2025-01-20T14:25:58.025597Z","end":"2025-01-20T14:25:58.209486Z","steps":["trace[328400424] 'range keys from in-memory index tree'  (duration: 183.103479ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:26:24.573183Z","caller":"traceutil/trace.go:171","msg":"trace[1501133836] transaction","detail":"{read_only:false; response_revision:1603; number_of_response:1; }","duration":"131.575801ms","start":"2025-01-20T14:26:24.441582Z","end":"2025-01-20T14:26:24.573158Z","steps":["trace[1501133836] 'process raft request'  (duration: 62.321604ms)","trace[1501133836] 'compare'  (duration: 69.067008ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T14:26:50.451734Z","caller":"traceutil/trace.go:171","msg":"trace[1408261529] transaction","detail":"{read_only:false; response_revision:1625; number_of_response:1; }","duration":"124.875985ms","start":"2025-01-20T14:26:50.326830Z","end":"2025-01-20T14:26:50.451706Z","steps":["trace[1408261529] 'process raft request'  (duration: 124.39521ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:27:10.618060Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1391}
	{"level":"info","ts":"2025-01-20T14:27:10.624490Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1391,"took":"5.925872ms","hash":3809962984,"current-db-size-bytes":3108864,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1826816,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-01-20T14:27:10.624553Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3809962984,"revision":1391,"compact-revision":1131}
	{"level":"info","ts":"2025-01-20T14:27:38.986908Z","caller":"traceutil/trace.go:171","msg":"trace[934240832] transaction","detail":"{read_only:false; response_revision:1666; number_of_response:1; }","duration":"182.744136ms","start":"2025-01-20T14:27:38.804097Z","end":"2025-01-20T14:27:38.986841Z","steps":["trace[934240832] 'process raft request'  (duration: 175.21539ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:27:38.987527Z","caller":"traceutil/trace.go:171","msg":"trace[1731152203] linearizableReadLoop","detail":"{readStateIndex:1932; appliedIndex:1931; }","duration":"182.155402ms","start":"2025-01-20T14:27:38.805321Z","end":"2025-01-20T14:27:38.987477Z","steps":["trace[1731152203] 'read index received'  (duration: 173.921613ms)","trace[1731152203] 'applied index is now lower than readState.Index'  (duration: 8.23234ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T14:27:39.068969Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.596653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-01-20T14:27:39.069088Z","caller":"traceutil/trace.go:171","msg":"trace[807930188] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1666; }","duration":"263.764067ms","start":"2025-01-20T14:27:38.805292Z","end":"2025-01-20T14:27:39.069056Z","steps":["trace[807930188] 'agreement among raft nodes before linearized reading'  (duration: 185.109245ms)","trace[807930188] 'count revisions from in-memory index tree'  (duration: 78.459986ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T14:27:39.069958Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.236127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:27:39.070099Z","caller":"traceutil/trace.go:171","msg":"trace[923595246] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1666; }","duration":"254.410083ms","start":"2025-01-20T14:27:38.815639Z","end":"2025-01-20T14:27:39.070049Z","steps":["trace[923595246] 'agreement among raft nodes before linearized reading'  (duration: 174.802456ms)","trace[923595246] 'range keys from in-memory index tree'  (duration: 79.449548ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T14:28:04.593199Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.648361ms","expected-duration":"100ms","prefix":"","request":"header:<ID:641926241723233181 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.104\" mod_revision:1679 > success:<request_put:<key:\"/registry/masterleases/192.168.72.104\" value_size:67 lease:641926241723233179 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.104\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-01-20T14:28:04.593585Z","caller":"traceutil/trace.go:171","msg":"trace[1154220177] linearizableReadLoop","detail":"{readStateIndex:1959; appliedIndex:1958; }","duration":"179.899446ms","start":"2025-01-20T14:28:04.413661Z","end":"2025-01-20T14:28:04.593560Z","steps":["trace[1154220177] 'read index received'  (duration: 54.463122ms)","trace[1154220177] 'applied index is now lower than readState.Index'  (duration: 125.435212ms)"],"step_count":2}
	{"level":"info","ts":"2025-01-20T14:28:04.593620Z","caller":"traceutil/trace.go:171","msg":"trace[836618391] transaction","detail":"{read_only:false; response_revision:1687; number_of_response:1; }","duration":"251.696708ms","start":"2025-01-20T14:28:04.341908Z","end":"2025-01-20T14:28:04.593605Z","steps":["trace[836618391] 'process raft request'  (duration: 126.265643ms)","trace[836618391] 'compare'  (duration: 123.549065ms)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T14:28:04.593764Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.094913ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:28:04.593802Z","caller":"traceutil/trace.go:171","msg":"trace[1656734131] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1687; }","duration":"180.163944ms","start":"2025-01-20T14:28:04.413630Z","end":"2025-01-20T14:28:04.593794Z","steps":["trace[1656734131] 'agreement among raft nodes before linearized reading'  (duration: 180.001901ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:28:05.386314Z","caller":"traceutil/trace.go:171","msg":"trace[1954631132] transaction","detail":"{read_only:false; response_revision:1688; number_of_response:1; }","duration":"227.727006ms","start":"2025-01-20T14:28:05.158568Z","end":"2025-01-20T14:28:05.386295Z","steps":["trace[1954631132] 'process raft request'  (duration: 227.610564ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-20T14:28:05.386971Z","caller":"traceutil/trace.go:171","msg":"trace[37984678] linearizableReadLoop","detail":"{readStateIndex:1960; appliedIndex:1960; }","duration":"173.127716ms","start":"2025-01-20T14:28:05.213822Z","end":"2025-01-20T14:28:05.386950Z","steps":["trace[37984678] 'read index received'  (duration: 173.121348ms)","trace[37984678] 'applied index is now lower than readState.Index'  (duration: 5.083µs)"],"step_count":2}
	{"level":"warn","ts":"2025-01-20T14:28:05.387104Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.252126ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-20T14:28:05.387175Z","caller":"traceutil/trace.go:171","msg":"trace[1480351487] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1688; }","duration":"173.370239ms","start":"2025-01-20T14:28:05.213796Z","end":"2025-01-20T14:28:05.387166Z","steps":["trace[1480351487] 'agreement among raft nodes before linearized reading'  (duration: 173.246819ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:28:27 up 26 min,  0 users,  load average: 0.20, 0.14, 0.17
	Linux default-k8s-diff-port-727256 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [12ef8b8a7b3c7eeff8d96163889beed7bad7001c4d99c83dd19da322ed916535] <==
	I0120 14:25:13.430038       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:25:13.432011       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 14:27:12.430244       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:27:12.430422       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 14:27:13.432307       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:27:13.432473       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0120 14:27:13.432566       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:27:13.432592       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0120 14:27:13.433671       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:27:13.433695       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0120 14:28:13.434271       1 handler_proxy.go:99] no RequestInfo found in the context
	W0120 14:28:13.434509       1 handler_proxy.go:99] no RequestInfo found in the context
	E0120 14:28:13.434661       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0120 14:28:13.434703       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0120 14:28:13.435876       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0120 14:28:13.436016       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [467aa97517d1f70890bfc5aa0e40e53d9f484be0e0ef5c2800a1434a1ee3bfec] <==
	W0120 14:07:05.181755       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.424180       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.507028       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.541099       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.682326       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.709041       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.793864       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.826411       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.859920       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.911234       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.937793       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:05.971231       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.084010       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.172463       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.224087       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.258485       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.323287       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.467171       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.503079       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.507699       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.551789       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.555528       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.589790       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.676306       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0120 14:07:06.689210       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3f8c6da9982baf502a8343227fa5b40570c06f6451bfe6db7923def4f8978b46] <==
	I0120 14:23:30.601075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="64.302µs"
	I0120 14:23:31.181974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="53.923µs"
	I0120 14:23:36.678000       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="43.498µs"
	E0120 14:23:49.227218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:23:49.368071       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:24:19.234674       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:24:19.375950       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:24:49.241225       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:24:49.383852       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:25:19.248212       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:25:19.391915       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:25:49.255280       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:25:49.402987       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:26:19.262440       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:26:19.412193       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 14:26:27.384916       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-727256"
	E0120 14:26:49.268652       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:26:49.424197       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:27:19.277607       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:27:19.434963       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:27:49.284572       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:27:49.445185       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0120 14:28:19.293150       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0120 14:28:19.454588       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0120 14:28:22.186751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="149.792µs"
	
	
	==> kube-proxy [c81ae35d51e23491985ba4cb10eb67d1f7da106a36024f925c9bef2c66707409] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0120 14:07:21.354904       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0120 14:07:21.372720       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.104"]
	E0120 14:07:21.372810       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0120 14:07:21.500717       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0120 14:07:21.500757       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0120 14:07:21.500815       1 server_linux.go:170] "Using iptables Proxier"
	I0120 14:07:21.514807       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0120 14:07:21.515121       1 server.go:497] "Version info" version="v1.32.0"
	I0120 14:07:21.515133       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0120 14:07:21.517427       1 config.go:199] "Starting service config controller"
	I0120 14:07:21.517486       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0120 14:07:21.517508       1 config.go:105] "Starting endpoint slice config controller"
	I0120 14:07:21.517511       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0120 14:07:21.518091       1 config.go:329] "Starting node config controller"
	I0120 14:07:21.518098       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0120 14:07:21.620901       1 shared_informer.go:320] Caches are synced for service config
	I0120 14:07:21.620957       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0120 14:07:21.621250       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a254d436887a02f85fb4e7314b5fd146d10c529ebd7924c5abd45e37220b9503] <==
	W0120 14:07:13.289954       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0120 14:07:13.290018       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.304002       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0120 14:07:13.304055       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.361274       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0120 14:07:13.361330       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.399082       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0120 14:07:13.399137       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0120 14:07:13.415655       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0120 14:07:13.415770       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.501674       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0120 14:07:13.501745       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.571534       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0120 14:07:13.571580       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.691903       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0120 14:07:13.693540       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.705274       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0120 14:07:13.705327       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.822777       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0120 14:07:13.822909       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.830904       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0120 14:07:13.830958       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0120 14:07:13.863413       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0120 14:07:13.863467       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0120 14:07:15.545427       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 20 14:27:55 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:27:55.660953    3055 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383275660581180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:27:55 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:27:55.661266    3055 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383275660581180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:27:56 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:27:56.165129    3055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-kp5hl" podUID="190513f9-3e9f-4705-ae23-9481987802f1"
	Jan 20 14:27:57 default-k8s-diff-port-727256 kubelet[3055]: I0120 14:27:57.170451    3055 scope.go:117] "RemoveContainer" containerID="887a79c86ca65e22731762a9aecfe9f0ee9ea922a41c5ab4bd7795e09b3ec81b"
	Jan 20 14:27:57 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:27:57.170674    3055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-dgr9v_kubernetes-dashboard(729664d2-e1f8-4eda-8930-de4e9782cd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-dgr9v" podUID="729664d2-e1f8-4eda-8930-de4e9782cd41"
	Jan 20 14:28:05 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:05.664102    3055 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383285663547974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:28:05 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:05.664714    3055 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383285663547974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:28:09 default-k8s-diff-port-727256 kubelet[3055]: I0120 14:28:09.163958    3055 scope.go:117] "RemoveContainer" containerID="887a79c86ca65e22731762a9aecfe9f0ee9ea922a41c5ab4bd7795e09b3ec81b"
	Jan 20 14:28:09 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:09.164597    3055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-dgr9v_kubernetes-dashboard(729664d2-e1f8-4eda-8930-de4e9782cd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-dgr9v" podUID="729664d2-e1f8-4eda-8930-de4e9782cd41"
	Jan 20 14:28:10 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:10.186731    3055 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 20 14:28:10 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:10.187043    3055 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 20 14:28:10 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:10.187905    3055 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79r78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-f79f97bbb-kp5hl_kube-system(190513f9-3e9f-4705-ae23-9481987802f1): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Jan 20 14:28:10 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:10.189459    3055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-kp5hl" podUID="190513f9-3e9f-4705-ae23-9481987802f1"
	Jan 20 14:28:15 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:15.198114    3055 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 20 14:28:15 default-k8s-diff-port-727256 kubelet[3055]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 20 14:28:15 default-k8s-diff-port-727256 kubelet[3055]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 20 14:28:15 default-k8s-diff-port-727256 kubelet[3055]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 20 14:28:15 default-k8s-diff-port-727256 kubelet[3055]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 20 14:28:15 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:15.669243    3055 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383295668027130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:28:15 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:15.669289    3055 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383295668027130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:28:22 default-k8s-diff-port-727256 kubelet[3055]: I0120 14:28:22.163165    3055 scope.go:117] "RemoveContainer" containerID="887a79c86ca65e22731762a9aecfe9f0ee9ea922a41c5ab4bd7795e09b3ec81b"
	Jan 20 14:28:22 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:22.163461    3055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-86c6bf9756-dgr9v_kubernetes-dashboard(729664d2-e1f8-4eda-8930-de4e9782cd41)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756-dgr9v" podUID="729664d2-e1f8-4eda-8930-de4e9782cd41"
	Jan 20 14:28:22 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:22.165941    3055 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-kp5hl" podUID="190513f9-3e9f-4705-ae23-9481987802f1"
	Jan 20 14:28:25 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:25.672820    3055 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383305672001812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Jan 20 14:28:25 default-k8s-diff-port-727256 kubelet[3055]: E0120 14:28:25.672877    3055 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383305672001812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:185713,},InodesUsed:&UInt64Value{Value:70,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [c027d62bee6ac9b99d4c493209eb64bf2ab1ba4009f6e2bfb82901c2fd86fa64] <==
	2025/01/20 14:16:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:16:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:17:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:17:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:18:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:18:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:19:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:19:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:20:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:20:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:21:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:21:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:22:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:22:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:23:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:23:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:24:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:24:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:25:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:25:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:26:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:26:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:27:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:27:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/01/20 14:28:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8a79d75f90fbb71e27a896913be9333e655ba33fa0d90ffa5148c3c1c5711e9c] <==
	I0120 14:07:22.926894       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0120 14:07:22.970820       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0120 14:07:22.970891       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0120 14:07:22.988204       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0120 14:07:22.989871       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-727256_cae145c1-0ca2-4317-9874-eacf5e66f981!
	I0120 14:07:22.990195       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d4ec99df-7868-4a22-a375-cb6a03016346", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-727256_cae145c1-0ca2-4317-9874-eacf5e66f981 became leader
	I0120 14:07:23.091436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-727256_cae145c1-0ca2-4317-9874-eacf5e66f981!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-727256 -n default-k8s-diff-port-727256
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-727256 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-kp5hl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-727256 describe pod metrics-server-f79f97bbb-kp5hl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-727256 describe pod metrics-server-f79f97bbb-kp5hl: exit status 1 (86.801281ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-kp5hl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-727256 describe pod metrics-server-f79f97bbb-kp5hl: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1618.96s)
E0120 14:29:49.089380 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:49.095923 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:49.107389 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:49.128899 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:49.170434 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:49.252039 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:49.413579 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:49.735407 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:50.377111 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:51.659193 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:54.220551 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
E0120 14:11:26.556619 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
[... the WARNING above repeated 52 more times; every pod list request to https://192.168.61.215:8443 was refused ...]
E0120 14:12:36.597498 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
[... the WARNING above repeated 12 more times with the same connection refused error ...]
E0120 14:12:49.630537 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
[... the WARNING above repeated 112 more times with the same connection refused error ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
E0120 14:16:26.557523 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
[the same WARNING repeats 57 more times after the cert_rotation error above; connections to 192.168.61.215:8443 are still refused]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
E0120 14:17:36.597329 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
(previous WARNING repeated 80 more times until the 9m0s wait expired)
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 2 (253.021771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-191446" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
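For context, the repeated WARNINGs above come from the test helper polling the apiserver for pods matching the k8s-app=kubernetes-dashboard selector until a 9m0s deadline. The following Go sketch is illustrative only, not the actual helpers_test.go code; the package name, function name, and kubeconfig path are invented. It shows one way such a poll can tolerate "connection refused" errors using client-go:

-- illustrative Go sketch --
// Poll the apiserver for pods matching a label selector, tolerating transient
// "connection refused" errors, until the deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(kubeconfig, namespace, selector string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// This is the situation the WARNING lines above record: the
			// apiserver is unreachable, so log the error and retry.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", namespace, selector, err)
			time.Sleep(3 * time.Second)
			continue
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				return nil
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("pod %q failed to start within %v: context deadline exceeded", selector, timeout)
}

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	err := waitForPods("/home/jenkins/.kube/config", "kubernetes-dashboard",
		"k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println(err)
}
-- /illustrative Go sketch --

Because every pod list in this run failed with connection refused, no dashboard pod was ever observed Running and the 9m0s deadline expired, which matches the context-deadline failure reported above.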
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 2 (246.374886ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
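The post-mortem first checks the profile's host state with out/minikube-linux-amd64 status --format={{.Host}}; a non-zero exit is tolerated because minikube signals degraded cluster states through its exit code while still printing the requested field (hence "exit status 2 (may be ok)" above). A minimal Go sketch of that pattern follows; it is illustrative only, not the test's actual code, and the function name is hypothetical:

-- illustrative Go sketch --
// Run "minikube status --format={{.Host}}" for a profile and treat a non-zero
// exit as informational, mirroring the "(may be ok)" handling in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func profileHostState(binary, profile string) (string, error) {
	cmd := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// The formatted field is still printed even when the exit code is
		// non-zero, so report the code and keep the captured output.
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		return strings.TrimSpace(string(out)), nil
	}
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := profileHostState("out/minikube-linux-amd64", "old-k8s-version-191446")
	fmt.Println(state, err) // here the host reports "Running" even though the apiserver is "Stopped"
}
-- /illustrative Go sketch --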
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-191446 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-191446 logs -n 25: (1.292821499s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:56 UTC | 20 Jan 25 13:57 UTC |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-038404                              | cert-expiration-038404       | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	| start   | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-648067             | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-955986 | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	|         | disable-driver-mounts-955986                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:59 UTC |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-647109            | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 14:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-648067                  | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 13:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-191446        | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-727256  | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 13:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 14:01 UTC |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-647109                 | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC | 20 Jan 25 14:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-191446             | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-727256       | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 14:01:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 14:01:30.648649 1971324 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:01:30.648768 1971324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:30.648777 1971324 out.go:358] Setting ErrFile to fd 2...
	I0120 14:01:30.648781 1971324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:30.648971 1971324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 14:01:30.649563 1971324 out.go:352] Setting JSON to false
	I0120 14:01:30.650677 1971324 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20637,"bootTime":1737361054,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:01:30.650808 1971324 start.go:139] virtualization: kvm guest
	I0120 14:01:30.653087 1971324 out.go:177] * [default-k8s-diff-port-727256] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:01:30.654902 1971324 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:01:30.654958 1971324 notify.go:220] Checking for updates...
	I0120 14:01:30.657200 1971324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:01:30.658358 1971324 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:01:30.659540 1971324 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:01:30.660755 1971324 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:01:30.662124 1971324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:01:30.664066 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:01:30.664694 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:30.664783 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:30.683363 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0120 14:01:30.684660 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:30.685421 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:30.685453 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:30.685849 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:30.686136 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:30.686482 1971324 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:01:30.686962 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:30.687017 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:30.705214 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0120 14:01:30.705778 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:30.706464 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:30.706496 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:30.706910 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:30.707413 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:30.748140 1971324 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 14:01:30.749542 1971324 start.go:297] selected driver: kvm2
	I0120 14:01:30.749575 1971324 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:30.749732 1971324 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:01:30.750471 1971324 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:30.750569 1971324 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:01:30.769419 1971324 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:01:30.769920 1971324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:01:30.769962 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:01:30.770026 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:01:30.770087 1971324 start.go:340] cluster config:
	{Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:30.770203 1971324 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:30.772094 1971324 out.go:177] * Starting "default-k8s-diff-port-727256" primary control-plane node in "default-k8s-diff-port-727256" cluster
	I0120 14:01:27.567956 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .Start
	I0120 14:01:27.568241 1971155 main.go:141] libmachine: (old-k8s-version-191446) starting domain...
	I0120 14:01:27.568273 1971155 main.go:141] libmachine: (old-k8s-version-191446) ensuring networks are active...
	I0120 14:01:27.569283 1971155 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network default is active
	I0120 14:01:27.569742 1971155 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network mk-old-k8s-version-191446 is active
	I0120 14:01:27.570107 1971155 main.go:141] libmachine: (old-k8s-version-191446) getting domain XML...
	I0120 14:01:27.570794 1971155 main.go:141] libmachine: (old-k8s-version-191446) creating domain...
	I0120 14:01:28.844259 1971155 main.go:141] libmachine: (old-k8s-version-191446) waiting for IP...
	I0120 14:01:28.845169 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:28.845736 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:28.845869 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:28.845749 1971190 retry.go:31] will retry after 249.093991ms: waiting for domain to come up
	I0120 14:01:29.096266 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.096835 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.096870 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.096778 1971190 retry.go:31] will retry after 285.937419ms: waiting for domain to come up
	I0120 14:01:29.384654 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.385227 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.385260 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.385184 1971190 retry.go:31] will retry after 403.444594ms: waiting for domain to come up
	I0120 14:01:29.789819 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.790466 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.790516 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.790442 1971190 retry.go:31] will retry after 525.904837ms: waiting for domain to come up
	I0120 14:01:30.361342 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:30.361758 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:30.361799 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:30.361742 1971190 retry.go:31] will retry after 498.844656ms: waiting for domain to come up
	I0120 14:01:30.862672 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:30.863328 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:30.863359 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:30.863284 1971190 retry.go:31] will retry after 695.176765ms: waiting for domain to come up
	I0120 14:01:31.559994 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:31.560418 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:31.560483 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:31.560423 1971190 retry.go:31] will retry after 1.138767233s: waiting for domain to come up
	I0120 14:01:29.280435 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:31.281034 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:33.778046 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:32.686925 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:35.185223 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:30.773441 1971324 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:01:30.773503 1971324 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 14:01:30.773514 1971324 cache.go:56] Caching tarball of preloaded images
	I0120 14:01:30.773638 1971324 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 14:01:30.773650 1971324 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 14:01:30.773755 1971324 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/config.json ...
	I0120 14:01:30.774002 1971324 start.go:360] acquireMachinesLock for default-k8s-diff-port-727256: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:01:32.700822 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:32.701293 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:32.701323 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:32.701238 1971190 retry.go:31] will retry after 1.039348308s: waiting for domain to come up
	I0120 14:01:33.742152 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:33.742798 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:33.742827 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:33.742756 1971190 retry.go:31] will retry after 1.487881975s: waiting for domain to come up
	I0120 14:01:35.232385 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:35.232903 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:35.233000 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:35.232883 1971190 retry.go:31] will retry after 1.541170209s: waiting for domain to come up
	I0120 14:01:36.775877 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:36.776558 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:36.776586 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:36.776513 1971190 retry.go:31] will retry after 2.896053576s: waiting for domain to come up
	I0120 14:01:35.778385 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:37.778939 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:37.187266 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:39.686105 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:39.675363 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:39.675986 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:39.676021 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:39.675945 1971190 retry.go:31] will retry after 3.105341623s: waiting for domain to come up
	I0120 14:01:39.779284 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.278570 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.185136 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:44.686564 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.783450 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:42.783953 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:42.783979 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:42.783919 1971190 retry.go:31] will retry after 3.216558184s: waiting for domain to come up
	I0120 14:01:46.001813 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.002358 1971155 main.go:141] libmachine: (old-k8s-version-191446) found domain IP: 192.168.61.215
	I0120 14:01:46.002386 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has current primary IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.002392 1971155 main.go:141] libmachine: (old-k8s-version-191446) reserving static IP address...
	I0120 14:01:46.002890 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "old-k8s-version-191446", mac: "52:54:00:87:83:fb", ip: "192.168.61.215"} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.002913 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | skip adding static IP to network mk-old-k8s-version-191446 - found existing host DHCP lease matching {name: "old-k8s-version-191446", mac: "52:54:00:87:83:fb", ip: "192.168.61.215"}
	I0120 14:01:46.002961 1971155 main.go:141] libmachine: (old-k8s-version-191446) reserved static IP address 192.168.61.215 for domain old-k8s-version-191446
	I0120 14:01:46.003012 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Getting to WaitForSSH function...
	I0120 14:01:46.003029 1971155 main.go:141] libmachine: (old-k8s-version-191446) waiting for SSH...
	I0120 14:01:46.005479 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.005822 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.005844 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.005930 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH client type: external
	I0120 14:01:46.005974 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa (-rw-------)
	I0120 14:01:46.006012 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:01:46.006030 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | About to run SSH command:
	I0120 14:01:46.006042 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | exit 0
	I0120 14:01:46.134861 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | SSH cmd err, output: <nil>: 
	I0120 14:01:46.135287 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetConfigRaw
	I0120 14:01:46.135993 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:46.138498 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.138913 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.138949 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.139408 1971155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/config.json ...
	I0120 14:01:46.139628 1971155 machine.go:93] provisionDockerMachine start ...
	I0120 14:01:46.139648 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:46.139910 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.142776 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.143168 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.143196 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.143377 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.143551 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.143710 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.143884 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.144084 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.144287 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.144299 1971155 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:01:46.259874 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:01:46.259909 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.260184 1971155 buildroot.go:166] provisioning hostname "old-k8s-version-191446"
	I0120 14:01:46.260218 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.260442 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.263109 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.263469 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.263498 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.263608 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.263809 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.263964 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.264115 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.264263 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.264566 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.264598 1971155 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-191446 && echo "old-k8s-version-191446" | sudo tee /etc/hostname
	I0120 14:01:46.390733 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-191446
	
	I0120 14:01:46.390778 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.394086 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.394452 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.394495 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.394665 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.394902 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.395120 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.395312 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.395484 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.395721 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.395742 1971155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-191446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-191446/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-191446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:01:46.517398 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:01:46.517429 1971155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:01:46.517474 1971155 buildroot.go:174] setting up certificates
	I0120 14:01:46.517489 1971155 provision.go:84] configureAuth start
	I0120 14:01:46.517501 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.517852 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:46.520852 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.521243 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.521276 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.521419 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.523721 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.524173 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.524216 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.524323 1971155 provision.go:143] copyHostCerts
	I0120 14:01:46.524385 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:01:46.524406 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:01:46.524505 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:01:46.524641 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:01:46.524653 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:01:46.524681 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:01:46.524749 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:01:46.524756 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:01:46.524777 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:01:46.524823 1971155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-191446 san=[127.0.0.1 192.168.61.215 localhost minikube old-k8s-version-191446]
	I0120 14:01:46.780575 1971155 provision.go:177] copyRemoteCerts
	I0120 14:01:46.780653 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:01:46.780692 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.783791 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.784174 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.784204 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.784390 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.784667 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.784947 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.785129 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:46.873537 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:01:46.906323 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 14:01:46.934595 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 14:01:46.963136 1971155 provision.go:87] duration metric: took 445.630599ms to configureAuth
	I0120 14:01:46.963175 1971155 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:01:46.963391 1971155 config.go:182] Loaded profile config "old-k8s-version-191446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 14:01:46.963495 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.966539 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.966917 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.966953 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.967102 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.967316 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.967488 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.967694 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.967860 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.968110 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.968140 1971155 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:01:47.221729 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:01:47.221758 1971155 machine.go:96] duration metric: took 1.082115997s to provisionDockerMachine
	I0120 14:01:47.221770 1971155 start.go:293] postStartSetup for "old-k8s-version-191446" (driver="kvm2")
	I0120 14:01:47.221780 1971155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:01:47.221801 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.222156 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:01:47.222213 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.225564 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.226024 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.226063 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.226226 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.226479 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.226678 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.226841 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.315044 1971155 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:01:47.319600 1971155 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:01:47.319630 1971155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:01:47.319699 1971155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:01:47.319785 1971155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:01:47.319880 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:01:47.331251 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:01:47.359102 1971155 start.go:296] duration metric: took 137.311216ms for postStartSetup
	I0120 14:01:47.359156 1971155 fix.go:56] duration metric: took 19.814283548s for fixHost
	I0120 14:01:47.359184 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.362176 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.362643 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.362680 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.362916 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.363161 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.363352 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.363520 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.363693 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:47.363932 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:47.363948 1971155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:01:47.480212 1971324 start.go:364] duration metric: took 16.706172443s to acquireMachinesLock for "default-k8s-diff-port-727256"
	I0120 14:01:47.480300 1971324 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:01:47.480313 1971324 fix.go:54] fixHost starting: 
	I0120 14:01:47.480706 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:47.480762 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:47.499438 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0120 14:01:47.499966 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:47.500523 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:47.500551 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:47.501028 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:47.501254 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:47.501413 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:01:47.503562 1971324 fix.go:112] recreateIfNeeded on default-k8s-diff-port-727256: state=Stopped err=<nil>
	I0120 14:01:47.503596 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	W0120 14:01:47.503774 1971324 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:01:47.505539 1971324 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-727256" ...
	I0120 14:01:44.778211 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.279184 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.480011 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381707.434903722
	
	I0120 14:01:47.480050 1971155 fix.go:216] guest clock: 1737381707.434903722
	I0120 14:01:47.480061 1971155 fix.go:229] Guest: 2025-01-20 14:01:47.434903722 +0000 UTC Remote: 2025-01-20 14:01:47.359160605 +0000 UTC m=+19.980745135 (delta=75.743117ms)
	I0120 14:01:47.480090 1971155 fix.go:200] guest clock delta is within tolerance: 75.743117ms
	I0120 14:01:47.480098 1971155 start.go:83] releasing machines lock for "old-k8s-version-191446", held for 19.935238773s
	I0120 14:01:47.480132 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.480450 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:47.483367 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.483792 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.483828 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.483945 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484435 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484606 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484699 1971155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:01:47.484761 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.484899 1971155 ssh_runner.go:195] Run: cat /version.json
	I0120 14:01:47.484929 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.487568 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.487980 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.488011 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488093 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488211 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.488434 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.488591 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.488630 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.488653 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488741 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.488862 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.489009 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.489153 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.489343 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.608326 1971155 ssh_runner.go:195] Run: systemctl --version
	I0120 14:01:47.614709 1971155 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:01:47.772139 1971155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:01:47.780427 1971155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:01:47.780502 1971155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:01:47.798266 1971155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:01:47.798304 1971155 start.go:495] detecting cgroup driver to use...
	I0120 14:01:47.798398 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:01:47.815867 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:01:47.835855 1971155 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:01:47.835918 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:01:47.853481 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:01:47.869379 1971155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:01:47.988401 1971155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:01:48.193315 1971155 docker.go:233] disabling docker service ...
	I0120 14:01:48.193390 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:01:48.214201 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:01:48.230964 1971155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:01:48.377733 1971155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:01:48.516198 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:01:48.533486 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:01:48.557115 1971155 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 14:01:48.557197 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.570080 1971155 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:01:48.570162 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.584225 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.596995 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.609663 1971155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:01:48.623942 1971155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:01:48.637099 1971155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:01:48.637171 1971155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:01:48.653873 1971155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:01:48.666171 1971155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:01:48.807308 1971155 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:01:48.914634 1971155 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:01:48.914731 1971155 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:01:48.920471 1971155 start.go:563] Will wait 60s for crictl version
	I0120 14:01:48.920558 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:48.924644 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:01:48.966008 1971155 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:01:48.966111 1971155 ssh_runner.go:195] Run: crio --version
	I0120 14:01:48.995639 1971155 ssh_runner.go:195] Run: crio --version
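To reproduce the same readiness checks by hand after the crio restart, a hedged sketch (the socket path and module name are taken from the log; the commands themselves are standard tooling, not what minikube runs internally):

	lsmod | grep br_netfilter                         # loaded explicitly above after the bridge-nf sysctl probe failed
	cat /proc/sys/net/ipv4/ip_forward                 # expected to print 1 after the echo above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version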
	I0120 14:01:49.031088 1971155 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 14:01:47.185914 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:49.187141 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.506801 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Start
	I0120 14:01:47.507007 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) starting domain...
	I0120 14:01:47.507037 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) ensuring networks are active...
	I0120 14:01:47.507737 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Ensuring network default is active
	I0120 14:01:47.508168 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Ensuring network mk-default-k8s-diff-port-727256 is active
	I0120 14:01:47.508707 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) getting domain XML...
	I0120 14:01:47.509515 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) creating domain...
	I0120 14:01:48.889668 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) waiting for IP...
	I0120 14:01:48.890857 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:48.891526 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:48.891694 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:48.891527 1971420 retry.go:31] will retry after 199.178216ms: waiting for domain to come up
	I0120 14:01:49.092132 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.092672 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.092706 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.092636 1971420 retry.go:31] will retry after 255.633273ms: waiting for domain to come up
	I0120 14:01:49.350430 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.351160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.351194 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.351128 1971420 retry.go:31] will retry after 428.048868ms: waiting for domain to come up
	I0120 14:01:49.781110 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.781882 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.781964 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.781864 1971420 retry.go:31] will retry after 580.304151ms: waiting for domain to come up
	I0120 14:01:50.363965 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.364533 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.364559 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:50.364529 1971420 retry.go:31] will retry after 531.332191ms: waiting for domain to come up
	I0120 14:01:49.032269 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:49.035945 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:49.036382 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:49.036423 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:49.036733 1971155 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0120 14:01:49.041470 1971155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
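A note on the /etc/hosts rewrite above: a plain "sudo echo ... > /etc/hosts" would perform the redirection as the unprivileged user, so the new contents are assembled in /tmp first and then copied into place with sudo. The same pattern, spread over two lines for readability:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts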
	I0120 14:01:49.055442 1971155 kubeadm.go:883] updating cluster {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:01:49.055654 1971155 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 14:01:49.055738 1971155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:01:49.111537 1971155 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 14:01:49.111603 1971155 ssh_runner.go:195] Run: which lz4
	I0120 14:01:49.116646 1971155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:01:49.121632 1971155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:01:49.121670 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 14:01:51.019564 1971155 crio.go:462] duration metric: took 1.902969728s to copy over tarball
	I0120 14:01:51.019668 1971155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 14:01:49.280435 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:51.780700 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:51.189623 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:53.687386 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:50.897267 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.897845 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.897880 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:50.897808 1971420 retry.go:31] will retry after 772.118387ms: waiting for domain to come up
	I0120 14:01:51.671806 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:51.672432 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:51.672466 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:51.672381 1971420 retry.go:31] will retry after 1.060623833s: waiting for domain to come up
	I0120 14:01:52.735398 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:52.735986 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:52.736018 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:52.735943 1971420 retry.go:31] will retry after 1.002731806s: waiting for domain to come up
	I0120 14:01:53.740048 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:53.740702 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:53.740731 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:53.740659 1971420 retry.go:31] will retry after 1.680491712s: waiting for domain to come up
	I0120 14:01:55.423577 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:55.424100 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:55.424135 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:55.424031 1971420 retry.go:31] will retry after 1.794880075s: waiting for domain to come up
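The retry loop above is waiting for the guest to obtain a DHCP lease on the libvirt network. An equivalent manual check, assuming standard virsh tooling and using the connection URI, network name and domain name that appear in this log:

	virsh -c qemu:///system net-dhcp-leases mk-default-k8s-diff-port-727256
	virsh -c qemu:///system domifaddr default-k8s-diff-port-727256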
	I0120 14:01:54.192207 1971155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.172482213s)
	I0120 14:01:54.192247 1971155 crio.go:469] duration metric: took 3.172642787s to extract the tarball
	I0120 14:01:54.192257 1971155 ssh_runner.go:146] rm: /preloaded.tar.lz4
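The preload step above, recapped as standalone commands (a sketch; the tar flags and paths are verbatim from the log, while the copy into the guest is done by ssh_runner internally):

	# the ~473 MB tarball is copied from .minikube/cache/preloaded-tarball/ to /preloaded.tar.lz4, then unpacked over /var
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	sudo crictl images --output json    # re-list (as the next lines show, the expected registry.k8s.io v1.20.0 images were still reported missing)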
	I0120 14:01:54.241548 1971155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:01:54.283118 1971155 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 14:01:54.283147 1971155 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 14:01:54.283222 1971155 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.283246 1971155 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.283292 1971155 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.283311 1971155 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.283369 1971155 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 14:01:54.283369 1971155 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.283370 1971155 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.283429 1971155 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.285174 1971155 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.285194 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.285222 1971155 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.285232 1971155 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.285484 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.285533 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.285551 1971155 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 14:01:54.285520 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.443508 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.451962 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.459320 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.478139 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.482365 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.490130 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.491742 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 14:01:54.535842 1971155 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 14:01:54.535930 1971155 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.536008 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.556510 1971155 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 14:01:54.556563 1971155 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.556617 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.604701 1971155 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 14:01:54.604747 1971155 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.604801 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648817 1971155 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 14:01:54.648847 1971155 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 14:01:54.648872 1971155 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.648887 1971155 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.648932 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648951 1971155 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 14:01:54.648986 1971155 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.649059 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648932 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.662210 1971155 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 14:01:54.662265 1971155 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 14:01:54.662271 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.662303 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.662304 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.662392 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.662403 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.666373 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.666427 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.784739 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.815550 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.815585 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:54.815637 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.815650 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.820367 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.820421 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.820459 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:55.000111 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:55.000218 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:55.013244 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:55.013276 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:55.013348 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:55.013372 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:55.015126 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:55.144073 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 14:01:55.144169 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 14:01:55.175966 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:55.175984 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 14:01:55.179810 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 14:01:55.179835 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 14:01:55.180076 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 14:01:55.216565 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 14:01:55.216646 1971155 cache_images.go:92] duration metric: took 933.479899ms to LoadCachedImages
	W0120 14:01:55.216768 1971155 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
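To unpack the block above: each required v1.20.0 image is looked up in the runtime, reported as "needs transfer" because it is not present at the expected ID, removed with crictl rmi so a fresh copy can be loaded, and then minikube falls back to per-image tarballs under .minikube/cache/images/ which do not exist on this host, hence the warning. The per-image check and cleanup, sketched with kube-scheduler as the example:

	# is the image already present at the expected ID?
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-scheduler:v1.20.0
	# if not, clear any stale copy before attempting to load it from the local cache
	sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0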
	I0120 14:01:55.216789 1971155 kubeadm.go:934] updating node { 192.168.61.215 8443 v1.20.0 crio true true} ...
	I0120 14:01:55.216907 1971155 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-191446 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:01:55.216973 1971155 ssh_runner.go:195] Run: crio config
	I0120 14:01:55.272348 1971155 cni.go:84] Creating CNI manager for ""
	I0120 14:01:55.272377 1971155 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:01:55.272387 1971155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:01:55.272407 1971155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.215 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-191446 NodeName:old-k8s-version-191446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 14:01:55.272581 1971155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-191446"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:01:55.272661 1971155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 14:01:55.285452 1971155 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:01:55.285532 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:01:55.300604 1971155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 14:01:55.321434 1971155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:01:55.339855 1971155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 14:01:55.360605 1971155 ssh_runner.go:195] Run: grep 192.168.61.215	control-plane.minikube.internal$ /etc/hosts
	I0120 14:01:55.364977 1971155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:01:55.380053 1971155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:01:55.499744 1971155 ssh_runner.go:195] Run: sudo systemctl start kubelet
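The scp lines above place the files whose contents were printed earlier; a compact recap of where they land (byte sizes from the log) and the reload that follows:

	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  (430 bytes: the ExecStart drop-in shown above)
	# /lib/systemd/system/kubelet.service                    (352 bytes: the base kubelet unit)
	# /var/tmp/minikube/kubeadm.yaml.new                     (2123 bytes: the kubeadm config printed above)
	sudo systemctl daemon-reload && sudo systemctl start kubelet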
	I0120 14:01:55.518232 1971155 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446 for IP: 192.168.61.215
	I0120 14:01:55.518267 1971155 certs.go:194] generating shared ca certs ...
	I0120 14:01:55.518300 1971155 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:01:55.518512 1971155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:01:55.518553 1971155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:01:55.518563 1971155 certs.go:256] generating profile certs ...
	I0120 14:01:55.571153 1971155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.key
	I0120 14:01:55.571288 1971155 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key.d5f4b946
	I0120 14:01:55.571350 1971155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key
	I0120 14:01:55.571517 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:01:55.571559 1971155 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:01:55.571570 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:01:55.571606 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:01:55.571641 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:01:55.571671 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:01:55.571733 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:01:55.572624 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:01:55.613349 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:01:55.645837 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:01:55.688637 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:01:55.736949 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 14:01:55.786459 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 14:01:55.833912 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:01:55.861615 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:01:55.891303 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:01:55.920272 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:01:55.947553 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:01:55.979159 1971155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:01:56.002476 1971155 ssh_runner.go:195] Run: openssl version
	I0120 14:01:56.011075 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:01:56.026823 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.033320 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.033404 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.041787 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:01:56.055968 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:01:56.072918 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.078642 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.078744 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.085416 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:01:56.101948 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:01:56.117742 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.123020 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.123086 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.129661 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 14:01:56.142113 1971155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:01:56.147841 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:01:56.154627 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:01:56.161139 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:01:56.167754 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:01:56.174520 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:01:56.181204 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
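Two openssl idioms appear in the block above: the /etc/ssl/certs/<hash>.0 symlink names (3ec20f2e.0, b5213941.0, 51391683.0) are the certificates' subject hashes, and -checkend 86400 succeeds only if the certificate remains valid for at least the next 86400 seconds (24 hours). A sketch of both, using files named in the log:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/$h.0
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for at least 24h"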
	I0120 14:01:56.187656 1971155 kubeadm.go:392] StartCluster: {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:56.187767 1971155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:01:56.187860 1971155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:01:56.233626 1971155 cri.go:89] found id: ""
	I0120 14:01:56.233718 1971155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:01:56.245027 1971155 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:01:56.245062 1971155 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:01:56.245126 1971155 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:01:56.258403 1971155 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:01:56.259211 1971155 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-191446" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:01:56.259525 1971155 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-191446" cluster setting kubeconfig missing "old-k8s-version-191446" context setting]
	I0120 14:01:56.260060 1971155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:01:56.288258 1971155 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:01:56.302812 1971155 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.215
	I0120 14:01:56.302855 1971155 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:01:56.302872 1971155 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 14:01:56.302942 1971155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:01:56.343694 1971155 cri.go:89] found id: ""
	I0120 14:01:56.343794 1971155 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:01:56.364228 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:01:56.375163 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:01:56.375187 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:01:56.375260 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:01:56.386527 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:01:56.386622 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:01:56.398715 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:01:56.410031 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:01:56.410112 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:01:56.420983 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:01:56.433109 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:01:56.433192 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:01:56.447385 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:01:56.460977 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:01:56.461066 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
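The cleanup pattern above, per kubeconfig file: if the file does not reference https://control-plane.minikube.internal:8443 (here every grep fails because the files are missing entirely), it is removed so the following kubeadm phases can regenerate it. Sketched for one file:

	sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf \
	  || sudo rm -f /etc/kubernetes/admin.conf    # same treatment for kubelet.conf, controller-manager.conf, scheduler.conf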
	I0120 14:01:56.472124 1971155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:01:56.484344 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:56.617563 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.344622 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:54.280536 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:56.779010 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:58.779726 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:55.714950 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:58.186438 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:57.220139 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:57.220694 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:57.220723 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:57.220656 1971420 retry.go:31] will retry after 2.261913004s: waiting for domain to come up
	I0120 14:01:59.484214 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:59.484791 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:59.484820 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:59.484718 1971420 retry.go:31] will retry after 2.630282337s: waiting for domain to come up
	I0120 14:01:57.621080 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.732306 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.856823 1971155 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:01:57.856931 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:58.357005 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:58.857625 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:59.358085 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:59.857398 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:00.357930 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:00.857134 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.357106 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.857163 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:02.357462 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
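The repeated pgrep lines above poll for the kube-apiserver process to appear after the control-plane phases, roughly every 500 ms judging by the timestamps. A minimal equivalent wait loop (a sketch, with an assumed 60-second cap rather than minikube's own timeout):

	for i in $(seq 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && break
	  sleep 0.5
	done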
	I0120 14:02:01.278692 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:03.777558 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:00.689940 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:03.185114 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:02.116624 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:02.117129 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:02:02.117163 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:02:02.117089 1971420 retry.go:31] will retry after 3.120909651s: waiting for domain to come up
	I0120 14:02:05.239389 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:05.239901 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:02:05.239953 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:02:05.239877 1971420 retry.go:31] will retry after 4.391800801s: waiting for domain to come up
	I0120 14:02:02.857734 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:03.357569 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:03.857955 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:04.357274 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:04.857819 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.357138 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.857025 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:06.357050 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:06.857399 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:07.357029 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.777988 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:08.278483 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:05.188225 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:07.685349 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:10.186075 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:09.634193 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.634637 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has current primary IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.634659 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) found domain IP: 192.168.72.104
	I0120 14:02:09.634684 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) reserving static IP address...
	I0120 14:02:09.635059 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) reserved static IP address 192.168.72.104 for domain default-k8s-diff-port-727256
	I0120 14:02:09.635098 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-727256", mac: "52:54:00:59:90:f7", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.635109 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) waiting for SSH...
	I0120 14:02:09.635133 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | skip adding static IP to network mk-default-k8s-diff-port-727256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-727256", mac: "52:54:00:59:90:f7", ip: "192.168.72.104"}
	I0120 14:02:09.635148 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Getting to WaitForSSH function...
	I0120 14:02:09.637199 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.637520 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.637554 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.637664 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Using SSH client type: external
	I0120 14:02:09.637695 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa (-rw-------)
	I0120 14:02:09.637761 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:02:09.637785 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | About to run SSH command:
	I0120 14:02:09.637834 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | exit 0
	I0120 14:02:09.763002 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | SSH cmd err, output: <nil>: 
	I0120 14:02:09.763410 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetConfigRaw
	I0120 14:02:09.764140 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:09.766862 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.767255 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.767309 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.767547 1971324 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/config.json ...
	I0120 14:02:09.767747 1971324 machine.go:93] provisionDockerMachine start ...
	I0120 14:02:09.767768 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:09.768084 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:09.770642 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.770978 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.771008 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.771159 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:09.771355 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.771522 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.771651 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:09.771843 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:09.772116 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:09.772135 1971324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:02:09.887277 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:02:09.887306 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:09.887607 1971324 buildroot.go:166] provisioning hostname "default-k8s-diff-port-727256"
	I0120 14:02:09.887644 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:09.887855 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:09.890533 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.890940 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.890972 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.891158 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:09.891363 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.891514 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.891625 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:09.891766 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:09.891982 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:09.891996 1971324 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-727256 && echo "default-k8s-diff-port-727256" | sudo tee /etc/hostname
	I0120 14:02:10.015326 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-727256
	
	I0120 14:02:10.015358 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.018488 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.018889 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.018920 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.019174 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.019397 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.019591 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.019775 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.019935 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:10.020121 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:10.020141 1971324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-727256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-727256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-727256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:02:10.136552 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:02:10.136593 1971324 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:02:10.136631 1971324 buildroot.go:174] setting up certificates
	I0120 14:02:10.136653 1971324 provision.go:84] configureAuth start
	I0120 14:02:10.136667 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:10.137020 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:10.140046 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.140577 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.140627 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.140766 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.143806 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.144185 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.144220 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.144340 1971324 provision.go:143] copyHostCerts
	I0120 14:02:10.144408 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:02:10.144433 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:02:10.144518 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:02:10.144663 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:02:10.144675 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:02:10.144716 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:02:10.144827 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:02:10.144838 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:02:10.144865 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:02:10.144958 1971324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-727256 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-727256 localhost minikube]
	I0120 14:02:07.857904 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:08.357419 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:08.857241 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:09.357914 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:09.857010 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.357118 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.857037 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:11.357243 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:11.857017 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:12.357401 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.704568 1971324 provision.go:177] copyRemoteCerts
	I0120 14:02:10.704642 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:02:10.704670 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.707581 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.707968 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.708005 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.708165 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.708406 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.708556 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.708705 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:10.798392 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:02:10.825489 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0120 14:02:10.851203 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 14:02:10.877144 1971324 provision.go:87] duration metric: took 740.469356ms to configureAuth
	I0120 14:02:10.877184 1971324 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:02:10.877372 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:02:10.877454 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.880681 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.881100 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.881135 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.881295 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.881487 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.881661 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.881824 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.881986 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:10.882152 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:10.882168 1971324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:02:11.118214 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:02:11.118246 1971324 machine.go:96] duration metric: took 1.350483814s to provisionDockerMachine
	I0120 14:02:11.118262 1971324 start.go:293] postStartSetup for "default-k8s-diff-port-727256" (driver="kvm2")
	I0120 14:02:11.118274 1971324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:02:11.118291 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.118662 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:02:11.118706 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.121765 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.122132 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.122160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.122325 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.122539 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.122849 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.123019 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.205783 1971324 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:02:11.211240 1971324 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:02:11.211282 1971324 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:02:11.211389 1971324 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:02:11.211524 1971324 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:02:11.211679 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:02:11.222226 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:02:11.248964 1971324 start.go:296] duration metric: took 130.683064ms for postStartSetup
	I0120 14:02:11.249013 1971324 fix.go:56] duration metric: took 23.768701383s for fixHost
	I0120 14:02:11.249043 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.252350 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.252735 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.252784 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.253016 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.253244 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.253451 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.253587 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.253769 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:11.254003 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:11.254018 1971324 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:02:11.360027 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381731.321642168
	
	I0120 14:02:11.360058 1971324 fix.go:216] guest clock: 1737381731.321642168
	I0120 14:02:11.360067 1971324 fix.go:229] Guest: 2025-01-20 14:02:11.321642168 +0000 UTC Remote: 2025-01-20 14:02:11.249019145 +0000 UTC m=+40.644950772 (delta=72.623023ms)
	I0120 14:02:11.360095 1971324 fix.go:200] guest clock delta is within tolerance: 72.623023ms
	I0120 14:02:11.360110 1971324 start.go:83] releasing machines lock for "default-k8s-diff-port-727256", held for 23.8798308s
	I0120 14:02:11.360147 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.360474 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:11.363630 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.364131 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.364160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.364441 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365063 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365248 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365348 1971324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:02:11.365404 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.365419 1971324 ssh_runner.go:195] Run: cat /version.json
	I0120 14:02:11.365439 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.368411 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.368839 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.368879 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.368903 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.369109 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.369341 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.369383 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.369421 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.369557 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.369661 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.369746 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.369900 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.370094 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.370254 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.448584 1971324 ssh_runner.go:195] Run: systemctl --version
	I0120 14:02:11.476726 1971324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:02:11.630047 1971324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:02:11.636964 1971324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:02:11.637055 1971324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:02:11.654243 1971324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:02:11.654288 1971324 start.go:495] detecting cgroup driver to use...
	I0120 14:02:11.654363 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:02:11.671320 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:02:11.687866 1971324 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:02:11.687931 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:02:11.703932 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:02:11.718827 1971324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:02:11.847210 1971324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:02:12.007623 1971324 docker.go:233] disabling docker service ...
	I0120 14:02:12.007698 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:02:12.024946 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:02:12.039357 1971324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:02:12.198785 1971324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:02:12.318653 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:02:12.335226 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:02:12.356118 1971324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 14:02:12.356185 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.368853 1971324 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:02:12.368928 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.382590 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.395155 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.407707 1971324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:02:12.420260 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.432650 1971324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.451911 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.463708 1971324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:02:12.474047 1971324 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:02:12.474171 1971324 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:02:12.487873 1971324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:02:12.498585 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:02:12.613685 1971324 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:02:12.729768 1971324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:02:12.729875 1971324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:02:12.734978 1971324 start.go:563] Will wait 60s for crictl version
	I0120 14:02:12.735064 1971324 ssh_runner.go:195] Run: which crictl
	I0120 14:02:12.739280 1971324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:02:12.786678 1971324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:02:12.786793 1971324 ssh_runner.go:195] Run: crio --version
	I0120 14:02:12.817307 1971324 ssh_runner.go:195] Run: crio --version
	I0120 14:02:12.852593 1971324 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 14:02:10.778869 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.782521 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.186380 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:14.187082 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.853765 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:12.856623 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:12.857007 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:12.857053 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:12.857241 1971324 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 14:02:12.861728 1971324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:02:12.877000 1971324 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:02:12.877127 1971324 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:02:12.877169 1971324 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:02:12.929986 1971324 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 14:02:12.930071 1971324 ssh_runner.go:195] Run: which lz4
	I0120 14:02:12.934799 1971324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:02:12.939259 1971324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:02:12.939291 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 14:02:15.168447 1971324 crio.go:462] duration metric: took 2.233676027s to copy over tarball
	I0120 14:02:15.168587 1971324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 14:02:12.857737 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:13.358043 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:13.857191 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:14.357168 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:14.857760 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.357900 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.857889 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:16.357039 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:16.857812 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:17.358144 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.279029 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:17.281259 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:16.687293 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:18.717798 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:17.552550 1971324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.383920665s)
	I0120 14:02:17.552588 1971324 crio.go:469] duration metric: took 2.38410161s to extract the tarball
	I0120 14:02:17.552598 1971324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 14:02:17.595819 1971324 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:02:17.649094 1971324 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 14:02:17.649124 1971324 cache_images.go:84] Images are preloaded, skipping loading
	I0120 14:02:17.649135 1971324 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.32.0 crio true true} ...
	I0120 14:02:17.649302 1971324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-727256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:02:17.649381 1971324 ssh_runner.go:195] Run: crio config
	I0120 14:02:17.704561 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:02:17.704586 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:02:17.704598 1971324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:02:17.704619 1971324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-727256 NodeName:default-k8s-diff-port-727256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 14:02:17.704750 1971324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-727256"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.104"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:02:17.704816 1971324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 14:02:17.716061 1971324 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:02:17.716155 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:02:17.727801 1971324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0120 14:02:17.748166 1971324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:02:17.766985 1971324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I0120 14:02:17.787650 1971324 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0120 14:02:17.791993 1971324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:02:17.808216 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:02:17.961542 1971324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:02:17.984203 1971324 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256 for IP: 192.168.72.104
	I0120 14:02:17.984233 1971324 certs.go:194] generating shared ca certs ...
	I0120 14:02:17.984291 1971324 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:02:17.984557 1971324 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:02:17.984648 1971324 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:02:17.984666 1971324 certs.go:256] generating profile certs ...
	I0120 14:02:17.984792 1971324 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.key
	I0120 14:02:17.984852 1971324 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.key.23647750
	I0120 14:02:17.984912 1971324 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.key
	I0120 14:02:17.985077 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:02:17.985121 1971324 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:02:17.985133 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:02:17.985155 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:02:17.985178 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:02:17.985198 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:02:17.985256 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:02:17.985878 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:02:18.048719 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:02:18.112171 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:02:18.145094 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:02:18.177563 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0120 14:02:18.207741 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 14:02:18.238193 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:02:18.267493 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:02:18.299204 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:02:18.326722 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:02:18.354365 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:02:18.387004 1971324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:02:18.407331 1971324 ssh_runner.go:195] Run: openssl version
	I0120 14:02:18.414499 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:02:18.428237 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.433437 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.433525 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.440279 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:02:18.453372 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:02:18.466685 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.472158 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.472221 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.479048 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 14:02:18.492239 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:02:18.505538 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.511360 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.511449 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.518290 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:02:18.531250 1971324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:02:18.536241 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:02:18.543115 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:02:18.549735 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:02:18.556016 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:02:18.563051 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:02:18.569460 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 14:02:18.576252 1971324 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:02:18.576356 1971324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:02:18.576422 1971324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:02:18.620494 1971324 cri.go:89] found id: ""
	I0120 14:02:18.620569 1971324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:02:18.631697 1971324 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:02:18.631720 1971324 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:02:18.631768 1971324 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:02:18.642156 1971324 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:02:18.643051 1971324 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-727256" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:02:18.643528 1971324 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-727256" cluster setting kubeconfig missing "default-k8s-diff-port-727256" context setting]
	I0120 14:02:18.644170 1971324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:02:18.668914 1971324 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:02:18.683072 1971324 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0120 14:02:18.683114 1971324 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:02:18.683129 1971324 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 14:02:18.683183 1971324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:02:18.729285 1971324 cri.go:89] found id: ""
	I0120 14:02:18.729378 1971324 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:02:18.747615 1971324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:02:18.760814 1971324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:02:18.760838 1971324 kubeadm.go:157] found existing configuration files:
	
	I0120 14:02:18.760894 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 14:02:18.770641 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:02:18.770724 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:02:18.781179 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 14:02:18.792949 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:02:18.793028 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:02:18.804366 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 14:02:18.815263 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:02:18.815346 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:02:18.825942 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 14:02:18.835903 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:02:18.835982 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:02:18.845972 1971324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:02:18.859961 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.003738 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.608160 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.849647 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.912750 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:20.009660 1971324 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:02:20.009754 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.510534 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:17.857538 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:18.357133 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:18.857266 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.357682 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.857168 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.357018 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.857784 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.357312 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.857374 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:22.357052 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.469918 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:21.779262 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:21.010159 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.032056 1971324 api_server.go:72] duration metric: took 1.022395241s to wait for apiserver process to appear ...
	I0120 14:02:21.032096 1971324 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:02:21.032131 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:21.032697 1971324 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0120 14:02:21.532363 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:23.847330 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:02:23.847369 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:02:23.847385 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:23.877401 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:02:23.877441 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:02:24.032826 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:24.039566 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:02:24.039598 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:02:24.532837 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:24.539028 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:02:24.539067 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:02:25.032465 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:25.039986 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0120 14:02:25.049377 1971324 api_server.go:141] control plane version: v1.32.0
	I0120 14:02:25.049420 1971324 api_server.go:131] duration metric: took 4.017316014s to wait for apiserver health ...
	I0120 14:02:25.049433 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:02:25.049442 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:02:25.051482 1971324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:02:21.185126 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:23.186698 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:25.052855 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:02:25.066022 1971324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:02:25.095180 1971324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:02:25.114905 1971324 system_pods.go:59] 8 kube-system pods found
	I0120 14:02:25.114960 1971324 system_pods.go:61] "coredns-668d6bf9bc-bz5qj" [d7374913-ed7c-42dc-a94f-44e1e2c757a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:02:25.114976 1971324 system_pods.go:61] "etcd-default-k8s-diff-port-727256" [1b7d5ec9-7630-4785-9c45-41ecdb748a8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 14:02:25.114986 1971324 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-727256" [41957bec-6146-4451-a58e-80cfc0954d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 14:02:25.115001 1971324 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-727256" [700634af-068c-43a9-93fd-cb10680f5547] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 14:02:25.115015 1971324 system_pods.go:61] "kube-proxy-q48xh" [714b43b5-29d9-4ffb-a571-d319ac71ea64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 14:02:25.115023 1971324 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-727256" [37e3619f-2d6c-4ffd-a8a2-e9e935b79342] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 14:02:25.115037 1971324 system_pods.go:61] "metrics-server-f79f97bbb-wgptn" [c1255c51-78a3-4f21-a054-b7eec52e8021] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:02:25.115045 1971324 system_pods.go:61] "storage-provisioner" [f116e0d4-4c99-46b2-bb50-448d19e948da] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 14:02:25.115063 1971324 system_pods.go:74] duration metric: took 19.845736ms to wait for pod list to return data ...
	I0120 14:02:25.115078 1971324 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:02:25.140084 1971324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:02:25.140127 1971324 node_conditions.go:123] node cpu capacity is 2
	I0120 14:02:25.140143 1971324 node_conditions.go:105] duration metric: took 25.059269ms to run NodePressure ...
	I0120 14:02:25.140170 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:25.471605 1971324 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 14:02:25.475871 1971324 kubeadm.go:739] kubelet initialised
	I0120 14:02:25.475897 1971324 kubeadm.go:740] duration metric: took 4.262299ms waiting for restarted kubelet to initialise ...
	I0120 14:02:25.475907 1971324 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:02:25.481730 1971324 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:22.857953 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:23.357118 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:23.857846 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.357974 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.858083 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:25.357532 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:25.857724 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:26.357640 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:26.857695 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:27.357848 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.279782 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:26.777640 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:28.778330 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:25.686765 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:28.186774 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:27.488205 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:29.990080 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:27.857637 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:28.357980 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:28.857073 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:29.357768 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:29.857689 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.358021 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.857725 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:31.357087 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:31.857093 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:32.358124 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.783033 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:33.279302 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:30.685246 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:33.195660 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:31.992749 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:34.489038 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:32.857233 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:33.357972 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:33.857268 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:34.357580 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:34.857317 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.357391 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.858044 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:36.357666 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:36.857501 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:37.357800 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.282839 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:37.778057 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:35.685341 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:37.685683 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:40.185648 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:35.989736 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:35.989764 1971324 pod_ready.go:82] duration metric: took 10.507995257s for pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.989775 1971324 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.994950 1971324 pod_ready.go:93] pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:35.994974 1971324 pod_ready.go:82] duration metric: took 5.193222ms for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.994984 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:38.002261 1971324 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:39.002130 1971324 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.002163 1971324 pod_ready.go:82] duration metric: took 3.007172332s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.002175 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.007066 1971324 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.007092 1971324 pod_ready.go:82] duration metric: took 4.909894ms for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.007102 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q48xh" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.011300 1971324 pod_ready.go:93] pod "kube-proxy-q48xh" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.011327 1971324 pod_ready.go:82] duration metric: took 4.217903ms for pod "kube-proxy-q48xh" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.011339 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.019267 1971324 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.019290 1971324 pod_ready.go:82] duration metric: took 7.94282ms for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.019299 1971324 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:37.857302 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:38.357923 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:38.857475 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.357375 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.857802 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:40.357852 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:40.857000 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:41.357100 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:41.857256 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:42.357310 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.778127 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:41.778931 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:42.185876 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:44.685996 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:41.026382 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:43.026822 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:45.526641 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:42.857156 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:43.357487 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:43.857399 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.357134 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.857807 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:45.358043 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:45.857787 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:46.357476 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:46.857480 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:47.357059 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.284374 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:46.778063 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:46.686210 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:49.185352 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:48.025036 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:50.027377 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:47.857389 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:48.357917 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:48.857908 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.357865 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.857103 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:50.357844 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:50.856981 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:51.357722 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:51.857389 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:52.357276 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.277771 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:51.280318 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:53.778876 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:51.685546 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:53.685814 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:52.526770 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.026492 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:52.857418 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:53.357813 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:53.857620 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:54.357209 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:54.857914 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.357510 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.857571 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:56.357067 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:56.857492 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:57.357062 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.783020 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:58.280672 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.686206 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:58.186818 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:57.026925 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:59.525553 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:57.857477 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:02:57.857614 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:02:57.905881 1971155 cri.go:89] found id: ""
	I0120 14:02:57.905912 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.905922 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:02:57.905929 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:02:57.905992 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:02:57.943622 1971155 cri.go:89] found id: ""
	I0120 14:02:57.943651 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.943661 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:02:57.943667 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:02:57.943723 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:02:57.988526 1971155 cri.go:89] found id: ""
	I0120 14:02:57.988562 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.988574 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:02:57.988583 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:02:57.988651 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:02:58.031485 1971155 cri.go:89] found id: ""
	I0120 14:02:58.031521 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.031534 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:02:58.031543 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:02:58.031610 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:02:58.068567 1971155 cri.go:89] found id: ""
	I0120 14:02:58.068598 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.068607 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:02:58.068613 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:02:58.068671 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:02:58.111132 1971155 cri.go:89] found id: ""
	I0120 14:02:58.111163 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.111172 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:02:58.111179 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:02:58.111249 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:02:58.148303 1971155 cri.go:89] found id: ""
	I0120 14:02:58.148347 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.148360 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:02:58.148369 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:02:58.148451 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:02:58.185950 1971155 cri.go:89] found id: ""
	I0120 14:02:58.185999 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.186012 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:02:58.186045 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:02:58.186067 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:02:58.240918 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:02:58.240967 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:02:58.257093 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:02:58.257146 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:02:58.414616 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:02:58.414647 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:02:58.414668 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:02:58.492488 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:02:58.492552 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:01.040468 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:01.055229 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:01.055334 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:01.096466 1971155 cri.go:89] found id: ""
	I0120 14:03:01.096504 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.096517 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:01.096527 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:01.096598 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:01.134935 1971155 cri.go:89] found id: ""
	I0120 14:03:01.134970 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.134981 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:01.134991 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:01.135067 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:01.173227 1971155 cri.go:89] found id: ""
	I0120 14:03:01.173260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.173270 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:01.173276 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:01.173330 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:01.214239 1971155 cri.go:89] found id: ""
	I0120 14:03:01.214284 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.214295 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:01.214305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:01.214371 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:01.256599 1971155 cri.go:89] found id: ""
	I0120 14:03:01.256637 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.256650 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:01.256659 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:01.256739 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:01.296996 1971155 cri.go:89] found id: ""
	I0120 14:03:01.297032 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.297061 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:01.297070 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:01.297138 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:01.332783 1971155 cri.go:89] found id: ""
	I0120 14:03:01.332823 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.332834 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:01.332843 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:01.332918 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:01.369365 1971155 cri.go:89] found id: ""
	I0120 14:03:01.369406 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.369421 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:01.369434 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:01.369451 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:01.414439 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:01.414480 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:01.471195 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:01.471246 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:01.486430 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:01.486462 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:01.574798 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:01.574828 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:01.574845 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:00.778133 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:02.778231 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:00.685031 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:03.185220 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:01.527499 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:04.025999 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:04.171235 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:04.188065 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:04.188156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:04.228357 1971155 cri.go:89] found id: ""
	I0120 14:03:04.228387 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.228400 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:04.228409 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:04.228467 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:04.267565 1971155 cri.go:89] found id: ""
	I0120 14:03:04.267610 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.267624 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:04.267635 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:04.267711 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:04.307392 1971155 cri.go:89] found id: ""
	I0120 14:03:04.307425 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.307434 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:04.307440 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:04.307508 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:04.349729 1971155 cri.go:89] found id: ""
	I0120 14:03:04.349767 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.349778 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:04.349786 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:04.349870 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:04.387475 1971155 cri.go:89] found id: ""
	I0120 14:03:04.387501 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.387509 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:04.387516 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:04.387572 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:04.427468 1971155 cri.go:89] found id: ""
	I0120 14:03:04.427509 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.427530 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:04.427539 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:04.427612 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:04.466639 1971155 cri.go:89] found id: ""
	I0120 14:03:04.466670 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.466679 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:04.466686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:04.466741 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:04.504757 1971155 cri.go:89] found id: ""
	I0120 14:03:04.504787 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.504795 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:04.504806 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:04.504818 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:04.557733 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:04.557779 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:04.573354 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:04.573387 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:04.650417 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:04.650446 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:04.650463 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:04.733072 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:04.733120 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:07.274982 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:07.290100 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:07.290193 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:07.332977 1971155 cri.go:89] found id: ""
	I0120 14:03:07.333017 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.333029 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:07.333038 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:07.333115 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:07.372892 1971155 cri.go:89] found id: ""
	I0120 14:03:07.372933 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.372945 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:07.372954 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:07.373026 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:07.425530 1971155 cri.go:89] found id: ""
	I0120 14:03:07.425577 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.425590 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:07.425600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:07.425662 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:04.778368 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:06.778647 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:05.684845 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:07.685532 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:06.026498 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:08.526091 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:10.526915 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:07.476155 1971155 cri.go:89] found id: ""
	I0120 14:03:07.476184 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.476193 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:07.476199 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:07.476254 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:07.521877 1971155 cri.go:89] found id: ""
	I0120 14:03:07.521914 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.521926 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:07.521939 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:07.522011 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:07.560355 1971155 cri.go:89] found id: ""
	I0120 14:03:07.560395 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.560409 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:07.560418 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:07.560487 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:07.600264 1971155 cri.go:89] found id: ""
	I0120 14:03:07.600300 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.600312 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:07.600320 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:07.600394 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:07.638852 1971155 cri.go:89] found id: ""
	I0120 14:03:07.638882 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.638891 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:07.638904 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:07.638921 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:07.697341 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:07.697388 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:07.712419 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:07.712453 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:07.790196 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:07.790219 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:07.790236 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:07.865638 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:07.865691 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:10.411816 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:10.425923 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:10.425995 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:10.469227 1971155 cri.go:89] found id: ""
	I0120 14:03:10.469260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.469271 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:10.469279 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:10.469335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:10.507955 1971155 cri.go:89] found id: ""
	I0120 14:03:10.507982 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.507991 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:10.507997 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:10.508064 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:10.543101 1971155 cri.go:89] found id: ""
	I0120 14:03:10.543127 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.543135 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:10.543141 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:10.543211 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:10.585664 1971155 cri.go:89] found id: ""
	I0120 14:03:10.585707 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.585722 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:10.585731 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:10.585798 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:10.623476 1971155 cri.go:89] found id: ""
	I0120 14:03:10.623509 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.623519 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:10.623526 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:10.623696 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:10.660175 1971155 cri.go:89] found id: ""
	I0120 14:03:10.660212 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.660236 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:10.660243 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:10.660328 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:10.701559 1971155 cri.go:89] found id: ""
	I0120 14:03:10.701587 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.701595 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:10.701601 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:10.701660 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:10.745904 1971155 cri.go:89] found id: ""
	I0120 14:03:10.745934 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.745946 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:10.745960 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:10.745977 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:10.797159 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:10.797195 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:10.811080 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:10.811114 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:10.892397 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:10.892453 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:10.892474 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:10.974483 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:10.974548 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:09.277769 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:11.279861 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.778783 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:10.188443 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:12.684802 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:14.685044 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.026831 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:15.028964 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.520017 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:13.534970 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:13.535057 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:13.572408 1971155 cri.go:89] found id: ""
	I0120 14:03:13.572447 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.572460 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:13.572469 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:13.572551 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:13.611551 1971155 cri.go:89] found id: ""
	I0120 14:03:13.611584 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.611594 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:13.611602 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:13.611679 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:13.648597 1971155 cri.go:89] found id: ""
	I0120 14:03:13.648643 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.648659 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:13.648669 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:13.648746 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:13.688240 1971155 cri.go:89] found id: ""
	I0120 14:03:13.688273 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.688284 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:13.688292 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:13.688359 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:13.726824 1971155 cri.go:89] found id: ""
	I0120 14:03:13.726858 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.726870 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:13.726879 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:13.726960 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:13.763355 1971155 cri.go:89] found id: ""
	I0120 14:03:13.763393 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.763406 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:13.763426 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:13.763520 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:13.805672 1971155 cri.go:89] found id: ""
	I0120 14:03:13.805709 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.805721 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:13.805729 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:13.805808 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:13.843604 1971155 cri.go:89] found id: ""
	I0120 14:03:13.843639 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.843647 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:13.843658 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:13.843677 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:13.900719 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:13.900769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:13.917734 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:13.917769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:13.989979 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:13.990004 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:13.990023 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:14.065519 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:14.065568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:16.608887 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:16.624966 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:16.625095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:16.663250 1971155 cri.go:89] found id: ""
	I0120 14:03:16.663286 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.663299 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:16.663309 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:16.663381 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:16.705075 1971155 cri.go:89] found id: ""
	I0120 14:03:16.705109 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.705121 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:16.705129 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:16.705203 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:16.743136 1971155 cri.go:89] found id: ""
	I0120 14:03:16.743172 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.743183 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:16.743196 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:16.743259 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:16.781721 1971155 cri.go:89] found id: ""
	I0120 14:03:16.781749 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.781759 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:16.781768 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:16.781838 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:16.819156 1971155 cri.go:89] found id: ""
	I0120 14:03:16.819186 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.819195 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:16.819201 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:16.819267 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:16.857239 1971155 cri.go:89] found id: ""
	I0120 14:03:16.857271 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.857282 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:16.857291 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:16.857366 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:16.896447 1971155 cri.go:89] found id: ""
	I0120 14:03:16.896484 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.896494 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:16.896500 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:16.896573 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:16.933838 1971155 cri.go:89] found id: ""
	I0120 14:03:16.933868 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.933884 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:16.933895 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:16.933912 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:16.947603 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:16.947641 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:17.030769 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:17.030797 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:17.030817 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:17.113685 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:17.113733 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:17.156727 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:17.156762 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:16.279194 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:18.279451 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:16.686668 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.185833 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:17.525194 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.526034 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.718569 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:19.732512 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:19.732591 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:19.767932 1971155 cri.go:89] found id: ""
	I0120 14:03:19.767967 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.767978 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:19.767986 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:19.768060 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:19.803810 1971155 cri.go:89] found id: ""
	I0120 14:03:19.803849 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.803862 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:19.803870 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:19.803939 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:19.843834 1971155 cri.go:89] found id: ""
	I0120 14:03:19.843862 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.843873 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:19.843886 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:19.843958 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:19.881732 1971155 cri.go:89] found id: ""
	I0120 14:03:19.881763 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.881774 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:19.881781 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:19.881848 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:19.924381 1971155 cri.go:89] found id: ""
	I0120 14:03:19.924417 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.924428 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:19.924437 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:19.924502 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:19.970958 1971155 cri.go:89] found id: ""
	I0120 14:03:19.970987 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.970996 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:19.971004 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:19.971065 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:20.012745 1971155 cri.go:89] found id: ""
	I0120 14:03:20.012781 1971155 logs.go:282] 0 containers: []
	W0120 14:03:20.012792 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:20.012800 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:20.012874 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:20.051390 1971155 cri.go:89] found id: ""
	I0120 14:03:20.051440 1971155 logs.go:282] 0 containers: []
	W0120 14:03:20.051458 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:20.051472 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:20.051496 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:20.110400 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:20.110442 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:20.127460 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:20.127494 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:20.204395 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:20.204421 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:20.204438 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:20.285467 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:20.285512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:20.281009 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:22.778157 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:21.685011 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:24.185145 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:21.527945 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:24.028130 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:22.839418 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:22.853700 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:22.853779 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:22.889955 1971155 cri.go:89] found id: ""
	I0120 14:03:22.889984 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.889992 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:22.889998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:22.890051 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:22.927006 1971155 cri.go:89] found id: ""
	I0120 14:03:22.927035 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.927044 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:22.927050 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:22.927114 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:22.964259 1971155 cri.go:89] found id: ""
	I0120 14:03:22.964295 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.964321 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:22.964330 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:22.964394 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:23.002226 1971155 cri.go:89] found id: ""
	I0120 14:03:23.002259 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.002268 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:23.002274 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:23.002331 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:23.039583 1971155 cri.go:89] found id: ""
	I0120 14:03:23.039620 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.039633 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:23.039641 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:23.039722 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:23.078733 1971155 cri.go:89] found id: ""
	I0120 14:03:23.078761 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.078770 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:23.078803 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:23.078878 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:23.114333 1971155 cri.go:89] found id: ""
	I0120 14:03:23.114390 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.114403 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:23.114411 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:23.114485 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:23.150761 1971155 cri.go:89] found id: ""
	I0120 14:03:23.150797 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.150809 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:23.150824 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:23.150839 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:23.213320 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:23.213384 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:23.228681 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:23.228717 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:23.301816 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:23.301842 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:23.301858 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:23.387061 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:23.387117 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:25.931823 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:25.945038 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:25.945134 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:25.981262 1971155 cri.go:89] found id: ""
	I0120 14:03:25.981315 1971155 logs.go:282] 0 containers: []
	W0120 14:03:25.981330 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:25.981340 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:25.981420 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:26.018945 1971155 cri.go:89] found id: ""
	I0120 14:03:26.018980 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.018993 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:26.019001 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:26.019080 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:26.060446 1971155 cri.go:89] found id: ""
	I0120 14:03:26.060477 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.060487 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:26.060496 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:26.060563 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:26.097720 1971155 cri.go:89] found id: ""
	I0120 14:03:26.097761 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.097782 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:26.097792 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:26.097861 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:26.133561 1971155 cri.go:89] found id: ""
	I0120 14:03:26.133593 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.133605 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:26.133614 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:26.133701 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:26.175091 1971155 cri.go:89] found id: ""
	I0120 14:03:26.175124 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.175136 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:26.175144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:26.175206 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:26.214747 1971155 cri.go:89] found id: ""
	I0120 14:03:26.214779 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.214788 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:26.214794 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:26.214864 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:26.264211 1971155 cri.go:89] found id: ""
	I0120 14:03:26.264244 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.264255 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:26.264269 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:26.264291 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:26.282025 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:26.282062 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:26.359793 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:26.359820 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:26.359842 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:26.447177 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:26.447224 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:26.487488 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:26.487523 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:25.279187 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:27.282700 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:26.186599 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:28.684816 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:26.527177 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:29.026067 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:29.039824 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:29.054535 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:29.054619 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:29.096202 1971155 cri.go:89] found id: ""
	I0120 14:03:29.096233 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.096245 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:29.096254 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:29.096316 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:29.139442 1971155 cri.go:89] found id: ""
	I0120 14:03:29.139475 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.139485 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:29.139492 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:29.139565 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:29.181278 1971155 cri.go:89] found id: ""
	I0120 14:03:29.181320 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.181334 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:29.181343 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:29.181424 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:29.222018 1971155 cri.go:89] found id: ""
	I0120 14:03:29.222049 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.222058 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:29.222072 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:29.222129 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:29.263028 1971155 cri.go:89] found id: ""
	I0120 14:03:29.263071 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.263083 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:29.263092 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:29.263167 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:29.307933 1971155 cri.go:89] found id: ""
	I0120 14:03:29.307965 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.307973 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:29.307980 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:29.308040 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:29.344204 1971155 cri.go:89] found id: ""
	I0120 14:03:29.344237 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.344250 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:29.344258 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:29.344327 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:29.381577 1971155 cri.go:89] found id: ""
	I0120 14:03:29.381604 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.381613 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:29.381623 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:29.381636 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:29.396553 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:29.396592 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:29.476381 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:29.476406 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:29.476420 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:29.552542 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:29.552586 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:29.597585 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:29.597619 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:32.150749 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:32.166160 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:32.166240 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:32.209621 1971155 cri.go:89] found id: ""
	I0120 14:03:32.209657 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.209671 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:32.209682 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:32.209764 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:32.250347 1971155 cri.go:89] found id: ""
	I0120 14:03:32.250386 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.250397 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:32.250405 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:32.250477 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:32.291555 1971155 cri.go:89] found id: ""
	I0120 14:03:32.291587 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.291599 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:32.291607 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:32.291677 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:32.329975 1971155 cri.go:89] found id: ""
	I0120 14:03:32.330015 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.330023 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:32.330030 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:32.330107 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:32.371131 1971155 cri.go:89] found id: ""
	I0120 14:03:32.371170 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.371190 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:32.371199 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:32.371273 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:32.409613 1971155 cri.go:89] found id: ""
	I0120 14:03:32.409653 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.409665 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:32.409672 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:32.409732 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:29.778719 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:32.279358 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:30.686778 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:33.184968 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:35.185398 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:31.026580 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:33.028333 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:35.527445 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:32.448898 1971155 cri.go:89] found id: ""
	I0120 14:03:32.448932 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.448944 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:32.448953 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:32.449029 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:32.486258 1971155 cri.go:89] found id: ""
	I0120 14:03:32.486295 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.486308 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:32.486323 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:32.486340 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:32.538196 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:32.538238 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:32.553140 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:32.553173 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:32.640124 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:32.640147 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:32.640161 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:32.725556 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:32.725615 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:35.276962 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:35.292662 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:35.292754 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:35.332066 1971155 cri.go:89] found id: ""
	I0120 14:03:35.332099 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.332111 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:35.332119 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:35.332188 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:35.369977 1971155 cri.go:89] found id: ""
	I0120 14:03:35.370010 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.370024 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:35.370030 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:35.370099 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:35.412630 1971155 cri.go:89] found id: ""
	I0120 14:03:35.412663 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.412672 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:35.412680 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:35.412746 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:35.450785 1971155 cri.go:89] found id: ""
	I0120 14:03:35.450819 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.450830 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:35.450838 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:35.450908 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:35.496877 1971155 cri.go:89] found id: ""
	I0120 14:03:35.496930 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.496943 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:35.496950 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:35.497021 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:35.538626 1971155 cri.go:89] found id: ""
	I0120 14:03:35.538662 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.538675 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:35.538684 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:35.538768 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:35.579144 1971155 cri.go:89] found id: ""
	I0120 14:03:35.579181 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.579195 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:35.579204 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:35.579283 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:35.623935 1971155 cri.go:89] found id: ""
	I0120 14:03:35.623985 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.623997 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:35.624038 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:35.624074 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:35.664682 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:35.664716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:35.722441 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:35.722505 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:35.752215 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:35.752246 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:35.843666 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:35.843692 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:35.843706 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:34.778378 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:36.778557 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:37.685015 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:39.689385 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:38.026699 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:40.526689 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:38.427318 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:38.441690 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:38.441767 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:38.481605 1971155 cri.go:89] found id: ""
	I0120 14:03:38.481636 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.481648 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:38.481655 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:38.481726 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:38.518378 1971155 cri.go:89] found id: ""
	I0120 14:03:38.518415 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.518427 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:38.518436 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:38.518512 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:38.561625 1971155 cri.go:89] found id: ""
	I0120 14:03:38.561674 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.561687 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:38.561696 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:38.561764 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:38.603557 1971155 cri.go:89] found id: ""
	I0120 14:03:38.603585 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.603593 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:38.603600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:38.603671 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:38.644242 1971155 cri.go:89] found id: ""
	I0120 14:03:38.644276 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.644289 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:38.644298 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:38.644364 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:38.686124 1971155 cri.go:89] found id: ""
	I0120 14:03:38.686154 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.686166 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:38.686175 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:38.686257 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:38.731861 1971155 cri.go:89] found id: ""
	I0120 14:03:38.731896 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.731906 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:38.731915 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:38.732002 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:38.773494 1971155 cri.go:89] found id: ""
	I0120 14:03:38.773522 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.773533 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:38.773579 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:38.773602 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:38.827125 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:38.827168 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:38.841903 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:38.841939 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:38.928392 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:38.928423 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:38.928440 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:39.008227 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:39.008270 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:41.554775 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:41.568912 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:41.568983 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:41.616455 1971155 cri.go:89] found id: ""
	I0120 14:03:41.616483 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.616491 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:41.616505 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:41.616584 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:41.654958 1971155 cri.go:89] found id: ""
	I0120 14:03:41.654995 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.655007 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:41.655014 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:41.655091 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:41.695758 1971155 cri.go:89] found id: ""
	I0120 14:03:41.695800 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.695814 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:41.695824 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:41.695901 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:41.733782 1971155 cri.go:89] found id: ""
	I0120 14:03:41.733815 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.733826 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:41.733834 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:41.733906 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:41.771097 1971155 cri.go:89] found id: ""
	I0120 14:03:41.771129 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.771141 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:41.771150 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:41.771266 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:41.808590 1971155 cri.go:89] found id: ""
	I0120 14:03:41.808629 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.808643 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:41.808652 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:41.808733 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:41.848943 1971155 cri.go:89] found id: ""
	I0120 14:03:41.848971 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.848982 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:41.848990 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:41.849057 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:41.886267 1971155 cri.go:89] found id: ""
	I0120 14:03:41.886302 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.886315 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:41.886328 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:41.886354 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:41.903471 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:41.903519 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:41.980320 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:41.980342 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:41.980358 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:42.060823 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:42.060868 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:42.102476 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:42.102511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:39.278753 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:41.778436 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:42.189707 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:44.686641 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:43.026630 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:45.526315 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:44.677081 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:44.691997 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:44.692094 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:44.732561 1971155 cri.go:89] found id: ""
	I0120 14:03:44.732599 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.732611 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:44.732620 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:44.732701 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:44.774215 1971155 cri.go:89] found id: ""
	I0120 14:03:44.774250 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.774259 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:44.774266 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:44.774330 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:44.815997 1971155 cri.go:89] found id: ""
	I0120 14:03:44.816031 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.816040 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:44.816046 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:44.816109 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:44.853946 1971155 cri.go:89] found id: ""
	I0120 14:03:44.853984 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.853996 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:44.854004 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:44.854070 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:44.896969 1971155 cri.go:89] found id: ""
	I0120 14:03:44.897006 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.897018 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:44.897028 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:44.897120 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:44.942458 1971155 cri.go:89] found id: ""
	I0120 14:03:44.942496 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.942508 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:44.942518 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:44.942648 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:44.984028 1971155 cri.go:89] found id: ""
	I0120 14:03:44.984059 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.984084 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:44.984094 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:44.984173 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:45.026096 1971155 cri.go:89] found id: ""
	I0120 14:03:45.026130 1971155 logs.go:282] 0 containers: []
	W0120 14:03:45.026141 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:45.026153 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:45.026169 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:45.110471 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:45.110527 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:45.154855 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:45.154892 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:45.214465 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:45.214511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:45.232020 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:45.232054 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:45.312932 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:44.278244 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:46.777269 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:48.777901 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.184802 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:49.184874 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.526520 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:50.026151 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.813923 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:47.828326 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:47.828422 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:47.865843 1971155 cri.go:89] found id: ""
	I0120 14:03:47.865875 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.865884 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:47.865891 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:47.865952 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:47.913554 1971155 cri.go:89] found id: ""
	I0120 14:03:47.913582 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.913590 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:47.913597 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:47.913655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:47.970084 1971155 cri.go:89] found id: ""
	I0120 14:03:47.970115 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.970135 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:47.970144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:47.970205 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:48.016631 1971155 cri.go:89] found id: ""
	I0120 14:03:48.016737 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.016750 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:48.016758 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:48.016833 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:48.073208 1971155 cri.go:89] found id: ""
	I0120 14:03:48.073253 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.073266 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:48.073276 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:48.073387 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:48.111638 1971155 cri.go:89] found id: ""
	I0120 14:03:48.111680 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.111692 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:48.111701 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:48.111783 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:48.155605 1971155 cri.go:89] found id: ""
	I0120 14:03:48.155640 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.155653 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:48.155661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:48.155732 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:48.204162 1971155 cri.go:89] found id: ""
	I0120 14:03:48.204204 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.204219 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:48.204234 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:48.204257 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:48.259987 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:48.260042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:48.275801 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:48.275832 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:48.361115 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:48.361150 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:48.361170 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:48.443876 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:48.443921 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:50.992981 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:51.009283 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:51.009370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:51.052492 1971155 cri.go:89] found id: ""
	I0120 14:03:51.052523 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.052533 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:51.052540 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:51.052616 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:51.096548 1971155 cri.go:89] found id: ""
	I0120 14:03:51.096575 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.096583 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:51.096589 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:51.096655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:51.138339 1971155 cri.go:89] found id: ""
	I0120 14:03:51.138369 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.138378 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:51.138385 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:51.138456 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:51.181155 1971155 cri.go:89] found id: ""
	I0120 14:03:51.181188 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.181198 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:51.181205 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:51.181261 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:51.223988 1971155 cri.go:89] found id: ""
	I0120 14:03:51.224026 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.224038 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:51.224045 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:51.224106 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:51.261863 1971155 cri.go:89] found id: ""
	I0120 14:03:51.261896 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.261905 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:51.261911 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:51.261976 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:51.303816 1971155 cri.go:89] found id: ""
	I0120 14:03:51.303850 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.303862 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:51.303870 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:51.303946 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:51.340897 1971155 cri.go:89] found id: ""
	I0120 14:03:51.340935 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.340946 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:51.340960 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:51.340983 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:51.393462 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:51.393512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:51.409330 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:51.409361 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:51.483485 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:51.483510 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:51.483525 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:51.560879 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:51.560920 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:50.779106 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:53.278544 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:51.185101 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:53.186284 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:55.186474 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:52.026377 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:54.526778 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:54.106090 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:54.121203 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:54.121282 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:54.171790 1971155 cri.go:89] found id: ""
	I0120 14:03:54.171818 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.171826 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:54.171833 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:54.171888 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:54.215021 1971155 cri.go:89] found id: ""
	I0120 14:03:54.215058 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.215069 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:54.215076 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:54.215138 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:54.252537 1971155 cri.go:89] found id: ""
	I0120 14:03:54.252565 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.252573 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:54.252580 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:54.252635 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:54.291366 1971155 cri.go:89] found id: ""
	I0120 14:03:54.291396 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.291405 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:54.291411 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:54.291482 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:54.328162 1971155 cri.go:89] found id: ""
	I0120 14:03:54.328206 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.328219 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:54.328227 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:54.328310 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:54.366862 1971155 cri.go:89] found id: ""
	I0120 14:03:54.366898 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.366908 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:54.366920 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:54.366996 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:54.404501 1971155 cri.go:89] found id: ""
	I0120 14:03:54.404534 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.404543 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:54.404549 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:54.404609 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:54.443468 1971155 cri.go:89] found id: ""
	I0120 14:03:54.443504 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.443518 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:54.443531 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:54.443554 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:54.458948 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:54.458993 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:54.542353 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:54.542379 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:54.542400 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:54.629014 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:54.629060 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:54.673822 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:54.673857 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:57.228212 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:57.242552 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:57.242667 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:57.282187 1971155 cri.go:89] found id: ""
	I0120 14:03:57.282215 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.282225 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:57.282232 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:57.282306 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:57.319233 1971155 cri.go:89] found id: ""
	I0120 14:03:57.319260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.319268 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:57.319279 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:57.319340 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:57.356706 1971155 cri.go:89] found id: ""
	I0120 14:03:57.356730 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.356738 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:57.356744 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:57.356805 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:57.396553 1971155 cri.go:89] found id: ""
	I0120 14:03:57.396583 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.396594 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:57.396600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:57.396657 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:55.783799 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:58.278376 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.186658 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:59.686959 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.027014 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:59.525725 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.434802 1971155 cri.go:89] found id: ""
	I0120 14:03:57.434835 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.434847 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:57.434855 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:57.434927 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:57.471668 1971155 cri.go:89] found id: ""
	I0120 14:03:57.471699 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.471710 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:57.471719 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:57.471789 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:57.512283 1971155 cri.go:89] found id: ""
	I0120 14:03:57.512318 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.512329 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:57.512337 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:57.512409 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:57.549948 1971155 cri.go:89] found id: ""
	I0120 14:03:57.549977 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.549986 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:57.549996 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:57.550010 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:57.639160 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:57.639213 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:57.685920 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:57.685954 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:57.743891 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:57.743935 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:57.760181 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:57.760223 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:57.840777 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:00.342573 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:00.360314 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:00.360397 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:00.407962 1971155 cri.go:89] found id: ""
	I0120 14:04:00.407997 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.408010 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:00.408020 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:00.408086 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:00.450962 1971155 cri.go:89] found id: ""
	I0120 14:04:00.451040 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.451053 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:00.451062 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:00.451129 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:00.487180 1971155 cri.go:89] found id: ""
	I0120 14:04:00.487216 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.487227 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:00.487239 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:00.487311 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:00.530835 1971155 cri.go:89] found id: ""
	I0120 14:04:00.530864 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.530873 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:00.530880 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:00.530948 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:00.570212 1971155 cri.go:89] found id: ""
	I0120 14:04:00.570245 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.570257 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:00.570265 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:00.570335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:00.611685 1971155 cri.go:89] found id: ""
	I0120 14:04:00.611716 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.611725 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:00.611731 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:00.611785 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:00.649370 1971155 cri.go:89] found id: ""
	I0120 14:04:00.649410 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.649423 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:00.649432 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:00.649498 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:00.685853 1971155 cri.go:89] found id: ""
	I0120 14:04:00.685889 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.685901 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:00.685915 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:00.685930 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:00.737015 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:00.737051 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:00.751682 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:00.751716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:00.830222 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:00.830247 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:00.830262 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:00.918955 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:00.919003 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:00.279152 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:02.778569 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:02.185020 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:04.185796 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:01.526915 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:03.529074 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:03.461705 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:03.478063 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:03.478144 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:03.525289 1971155 cri.go:89] found id: ""
	I0120 14:04:03.525326 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.525339 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:03.525349 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:03.525427 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:03.565302 1971155 cri.go:89] found id: ""
	I0120 14:04:03.565339 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.565351 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:03.565360 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:03.565441 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:03.607021 1971155 cri.go:89] found id: ""
	I0120 14:04:03.607048 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.607056 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:03.607063 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:03.607122 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:03.650398 1971155 cri.go:89] found id: ""
	I0120 14:04:03.650425 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.650433 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:03.650445 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:03.650513 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:03.689498 1971155 cri.go:89] found id: ""
	I0120 14:04:03.689531 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.689539 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:03.689545 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:03.689607 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:03.726928 1971155 cri.go:89] found id: ""
	I0120 14:04:03.726965 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.726978 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:03.726987 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:03.727054 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:03.764493 1971155 cri.go:89] found id: ""
	I0120 14:04:03.764532 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.764544 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:03.764552 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:03.764622 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:03.803514 1971155 cri.go:89] found id: ""
	I0120 14:04:03.803550 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.803562 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:03.803575 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:03.803595 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:03.847009 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:03.847045 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:03.900078 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:03.900124 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:03.916146 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:03.916179 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:03.988068 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:03.988102 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:03.988121 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:06.568829 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:06.583335 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:06.583422 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:06.628247 1971155 cri.go:89] found id: ""
	I0120 14:04:06.628283 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.628296 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:06.628305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:06.628365 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:06.673764 1971155 cri.go:89] found id: ""
	I0120 14:04:06.673792 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.673804 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:06.673820 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:06.673892 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:06.714328 1971155 cri.go:89] found id: ""
	I0120 14:04:06.714361 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.714373 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:06.714381 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:06.714458 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:06.750935 1971155 cri.go:89] found id: ""
	I0120 14:04:06.750975 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.750987 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:06.750996 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:06.751061 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:06.788944 1971155 cri.go:89] found id: ""
	I0120 14:04:06.788975 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.788982 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:06.788988 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:06.789056 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:06.826176 1971155 cri.go:89] found id: ""
	I0120 14:04:06.826216 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.826228 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:06.826245 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:06.826322 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:06.864607 1971155 cri.go:89] found id: ""
	I0120 14:04:06.864640 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.864649 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:06.864656 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:06.864741 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:06.901814 1971155 cri.go:89] found id: ""
	I0120 14:04:06.901889 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.901909 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:06.901922 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:06.901944 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:06.953391 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:06.953439 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:06.967876 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:06.967914 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:07.055449 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:07.055486 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:07.055511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:07.138279 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:07.138328 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
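The cycle above is minikube's log-gathering fallback: with the apiserver unreachable, each pass probes CRI-O through crictl for every control-plane component and finds no containers, then collects kubelet, dmesg, CRI-O and container-status output instead. Below is a minimal, illustrative Go sketch of that probe; it is not minikube's cri.go, it assumes it runs directly on the node rather than over SSH, and it uses only the `sudo crictl ps -a --quiet --name=...` invocation shown in the log.

// crictl_probe.go — illustrative sketch of the per-component container probe
// recorded in the log lines above (assumed to run directly on the node).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns the
// container IDs it printed; an empty slice means nothing matched.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, name := range components {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: found %d container(s)\n", name, len(ids))
	}
}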
	I0120 14:04:04.778659 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.780874 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.188401 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:08.685683 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.026194 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:08.525961 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:10.527780 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:09.684182 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:09.699353 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:09.699432 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:09.738834 1971155 cri.go:89] found id: ""
	I0120 14:04:09.738864 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.738875 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:09.738883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:09.738963 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:09.774822 1971155 cri.go:89] found id: ""
	I0120 14:04:09.774852 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.774864 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:09.774872 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:09.774942 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:09.813132 1971155 cri.go:89] found id: ""
	I0120 14:04:09.813167 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.813179 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:09.813187 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:09.813258 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:09.850809 1971155 cri.go:89] found id: ""
	I0120 14:04:09.850844 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.850855 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:09.850864 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:09.850947 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:09.889768 1971155 cri.go:89] found id: ""
	I0120 14:04:09.889802 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.889813 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:09.889821 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:09.889900 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:09.932037 1971155 cri.go:89] found id: ""
	I0120 14:04:09.932073 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.932081 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:09.932087 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:09.932150 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:09.970153 1971155 cri.go:89] found id: ""
	I0120 14:04:09.970197 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.970210 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:09.970218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:09.970287 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:10.009506 1971155 cri.go:89] found id: ""
	I0120 14:04:10.009535 1971155 logs.go:282] 0 containers: []
	W0120 14:04:10.009544 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:10.009555 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:10.009568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:10.097837 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:10.097896 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:10.140488 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:10.140534 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:10.195531 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:10.195575 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:10.210277 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:10.210322 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:10.296146 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:09.279024 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:11.279883 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:13.776738 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:11.178584 1969949 pod_ready.go:82] duration metric: took 4m0.000311545s for pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace to be "Ready" ...
	E0120 14:04:11.178646 1969949 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 14:04:11.178676 1969949 pod_ready.go:39] duration metric: took 4m14.547669609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:11.178719 1969949 kubeadm.go:597] duration metric: took 4m22.42355041s to restartPrimaryControlPlane
	W0120 14:04:11.178845 1969949 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:04:11.178885 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
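The 4m0s wait above ends with "context deadline exceeded": the readiness check is a poll bounded by a context deadline, and once the budget is spent minikube gives up and falls back to a full `kubeadm reset`. The Go sketch below is illustrative only (not minikube's pod_ready.go); the interval, timeout, and check function are placeholders.

// wait_ready.go — illustrative sketch of a deadline-bounded readiness poll
// like the one recorded above; names and durations are placeholders.
package main

import (
	"context"
	"fmt"
	"time"
)

// waitReady re-runs check every interval until it returns true or ctx expires,
// in which case the context error (e.g. context.DeadlineExceeded) is returned.
func waitReady(ctx context.Context, interval time.Duration, check func() bool) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// The log shows a 4m0s budget; a short one is used here so the sketch finishes quickly.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	neverReady := func() bool { return false } // placeholder readiness condition
	if err := waitReady(ctx, time.Second, neverReady); err != nil {
		fmt.Println("waitPodCondition:", err) // prints context.DeadlineExceeded here
	}
}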
	I0120 14:04:13.027079 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:15.027945 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:12.796944 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:12.810984 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:12.811085 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:12.849374 1971155 cri.go:89] found id: ""
	I0120 14:04:12.849413 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.849426 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:12.849435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:12.849509 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:12.885922 1971155 cri.go:89] found id: ""
	I0120 14:04:12.885951 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.885960 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:12.885967 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:12.886039 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:12.922978 1971155 cri.go:89] found id: ""
	I0120 14:04:12.923019 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.923031 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:12.923040 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:12.923108 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:12.960519 1971155 cri.go:89] found id: ""
	I0120 14:04:12.960563 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.960572 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:12.960578 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:12.960688 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:12.997662 1971155 cri.go:89] found id: ""
	I0120 14:04:12.997702 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.997715 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:12.997724 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:12.997803 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:13.035613 1971155 cri.go:89] found id: ""
	I0120 14:04:13.035640 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.035651 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:13.035660 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:13.035736 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:13.073354 1971155 cri.go:89] found id: ""
	I0120 14:04:13.073389 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.073401 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:13.073410 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:13.073480 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:13.113735 1971155 cri.go:89] found id: ""
	I0120 14:04:13.113771 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.113780 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:13.113791 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:13.113804 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:13.170858 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:13.170906 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:13.186341 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:13.186375 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:13.260514 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:13.260540 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:13.260557 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:13.347360 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:13.347411 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:15.891859 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:15.907144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:15.907238 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:15.943638 1971155 cri.go:89] found id: ""
	I0120 14:04:15.943675 1971155 logs.go:282] 0 containers: []
	W0120 14:04:15.943686 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:15.943693 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:15.943753 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:15.981820 1971155 cri.go:89] found id: ""
	I0120 14:04:15.981868 1971155 logs.go:282] 0 containers: []
	W0120 14:04:15.981882 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:15.981891 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:15.981971 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:16.019987 1971155 cri.go:89] found id: ""
	I0120 14:04:16.020058 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.020071 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:16.020080 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:16.020156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:16.059245 1971155 cri.go:89] found id: ""
	I0120 14:04:16.059278 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.059288 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:16.059295 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:16.059370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:16.095081 1971155 cri.go:89] found id: ""
	I0120 14:04:16.095123 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.095136 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:16.095146 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:16.095224 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:16.134357 1971155 cri.go:89] found id: ""
	I0120 14:04:16.134403 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.134416 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:16.134425 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:16.134497 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:16.177729 1971155 cri.go:89] found id: ""
	I0120 14:04:16.177762 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.177774 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:16.177783 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:16.177864 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:16.214324 1971155 cri.go:89] found id: ""
	I0120 14:04:16.214360 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.214371 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:16.214392 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:16.214412 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:16.270670 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:16.270716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:16.326541 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:16.326587 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:16.343430 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:16.343469 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:16.429522 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:16.429554 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:16.429572 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:15.778836 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:18.279084 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:17.526959 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:20.027030 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:19.008712 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:19.024398 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:19.024489 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:19.064138 1971155 cri.go:89] found id: ""
	I0120 14:04:19.064169 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.064178 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:19.064184 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:19.064253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:19.102639 1971155 cri.go:89] found id: ""
	I0120 14:04:19.102672 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.102681 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:19.102687 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:19.102755 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:19.141058 1971155 cri.go:89] found id: ""
	I0120 14:04:19.141105 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.141119 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:19.141130 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:19.141200 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:19.179972 1971155 cri.go:89] found id: ""
	I0120 14:04:19.180004 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.180013 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:19.180025 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:19.180095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:19.219516 1971155 cri.go:89] found id: ""
	I0120 14:04:19.219549 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.219562 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:19.219571 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:19.219634 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:19.262728 1971155 cri.go:89] found id: ""
	I0120 14:04:19.262764 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.262776 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:19.262785 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:19.262860 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:19.299472 1971155 cri.go:89] found id: ""
	I0120 14:04:19.299527 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.299539 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:19.299548 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:19.299634 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:19.341054 1971155 cri.go:89] found id: ""
	I0120 14:04:19.341095 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.341107 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:19.341119 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:19.341133 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:19.426002 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:19.426058 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:19.469471 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:19.469504 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:19.524625 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:19.524661 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:19.539365 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:19.539398 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:19.620545 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:22.122261 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:22.137515 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:22.137590 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:22.177366 1971155 cri.go:89] found id: ""
	I0120 14:04:22.177405 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.177417 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:22.177426 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:22.177494 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:22.215596 1971155 cri.go:89] found id: ""
	I0120 14:04:22.215641 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.215653 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:22.215662 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:22.215734 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:22.252783 1971155 cri.go:89] found id: ""
	I0120 14:04:22.252820 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.252832 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:22.252841 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:22.252917 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:22.295160 1971155 cri.go:89] found id: ""
	I0120 14:04:22.295199 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.295213 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:22.295221 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:22.295284 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:22.334614 1971155 cri.go:89] found id: ""
	I0120 14:04:22.334651 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.334662 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:22.334672 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:22.334754 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:22.372516 1971155 cri.go:89] found id: ""
	I0120 14:04:22.372545 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.372554 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:22.372562 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:22.372633 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:22.412784 1971155 cri.go:89] found id: ""
	I0120 14:04:22.412819 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.412827 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:22.412833 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:22.412895 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:20.778968 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.779314 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.526513 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:24.527843 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.449865 1971155 cri.go:89] found id: ""
	I0120 14:04:22.449900 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.449909 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:22.449920 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:22.449934 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:22.464473 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:22.464514 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:22.546804 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:22.546835 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:22.546858 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:22.624614 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:22.624664 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:22.679053 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:22.679085 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:25.238495 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:25.254177 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:25.254253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:25.299255 1971155 cri.go:89] found id: ""
	I0120 14:04:25.299291 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.299300 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:25.299308 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:25.299373 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:25.337454 1971155 cri.go:89] found id: ""
	I0120 14:04:25.337481 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.337490 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:25.337496 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:25.337556 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:25.375094 1971155 cri.go:89] found id: ""
	I0120 14:04:25.375129 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.375139 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:25.375148 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:25.375224 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:25.413177 1971155 cri.go:89] found id: ""
	I0120 14:04:25.413206 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.413217 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:25.413223 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:25.413288 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:25.448775 1971155 cri.go:89] found id: ""
	I0120 14:04:25.448812 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.448821 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:25.448827 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:25.448883 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:25.484560 1971155 cri.go:89] found id: ""
	I0120 14:04:25.484591 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.484600 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:25.484607 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:25.484660 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:25.522990 1971155 cri.go:89] found id: ""
	I0120 14:04:25.523029 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.523041 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:25.523049 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:25.523128 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:25.560861 1971155 cri.go:89] found id: ""
	I0120 14:04:25.560899 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.560910 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:25.560925 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:25.560941 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:25.614479 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:25.614528 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:25.630030 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:25.630070 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:25.704721 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:25.704758 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:25.704781 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:25.782265 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:25.782309 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:25.279994 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:27.778659 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:27.027167 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:29.525787 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:28.332905 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:28.351517 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:28.351594 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:28.394070 1971155 cri.go:89] found id: ""
	I0120 14:04:28.394110 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.394122 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:28.394130 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:28.394204 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:28.445893 1971155 cri.go:89] found id: ""
	I0120 14:04:28.445924 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.445934 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:28.445940 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:28.446034 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:28.511766 1971155 cri.go:89] found id: ""
	I0120 14:04:28.511801 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.511811 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:28.511820 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:28.511891 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:28.558333 1971155 cri.go:89] found id: ""
	I0120 14:04:28.558369 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.558382 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:28.558391 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:28.558469 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:28.608161 1971155 cri.go:89] found id: ""
	I0120 14:04:28.608196 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.608207 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:28.608215 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:28.608288 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:28.645545 1971155 cri.go:89] found id: ""
	I0120 14:04:28.645576 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.645585 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:28.645592 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:28.645651 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:28.682795 1971155 cri.go:89] found id: ""
	I0120 14:04:28.682833 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.682845 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:28.682854 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:28.682943 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:28.719887 1971155 cri.go:89] found id: ""
	I0120 14:04:28.719918 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.719928 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:28.719941 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:28.719965 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:28.776644 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:28.776683 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:28.791778 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:28.791812 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:28.870972 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:28.871001 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:28.871027 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:28.950524 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:28.950568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:31.494786 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:31.508961 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:31.509041 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:31.550239 1971155 cri.go:89] found id: ""
	I0120 14:04:31.550275 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.550287 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:31.550295 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:31.550374 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:31.589113 1971155 cri.go:89] found id: ""
	I0120 14:04:31.589149 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.589161 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:31.589169 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:31.589271 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:31.626500 1971155 cri.go:89] found id: ""
	I0120 14:04:31.626537 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.626547 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:31.626556 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:31.626655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:31.661941 1971155 cri.go:89] found id: ""
	I0120 14:04:31.661972 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.661980 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:31.661987 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:31.662079 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:31.699223 1971155 cri.go:89] found id: ""
	I0120 14:04:31.699269 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.699283 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:31.699291 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:31.699359 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:31.736559 1971155 cri.go:89] found id: ""
	I0120 14:04:31.736589 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.736601 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:31.736608 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:31.736680 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:31.774254 1971155 cri.go:89] found id: ""
	I0120 14:04:31.774296 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.774304 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:31.774314 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:31.774460 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:31.813913 1971155 cri.go:89] found id: ""
	I0120 14:04:31.813952 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.813964 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:31.813977 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:31.813991 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:31.864887 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:31.864936 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:31.880250 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:31.880286 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:31.955208 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:31.955232 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:31.955247 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:32.039812 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:32.039875 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:29.780496 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:32.277638 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:31.526304 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:33.527156 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:34.582127 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:34.595661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:34.595751 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:34.637306 1971155 cri.go:89] found id: ""
	I0120 14:04:34.637343 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.637355 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:34.637367 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:34.637440 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:34.676881 1971155 cri.go:89] found id: ""
	I0120 14:04:34.676913 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.676924 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:34.676929 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:34.676985 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:34.715677 1971155 cri.go:89] found id: ""
	I0120 14:04:34.715712 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.715723 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:34.715737 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:34.715801 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:34.754821 1971155 cri.go:89] found id: ""
	I0120 14:04:34.754855 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.754867 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:34.754875 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:34.754947 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:34.793093 1971155 cri.go:89] found id: ""
	I0120 14:04:34.793124 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.793133 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:34.793139 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:34.793200 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:34.830252 1971155 cri.go:89] found id: ""
	I0120 14:04:34.830285 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.830295 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:34.830302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:34.830370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:34.869405 1971155 cri.go:89] found id: ""
	I0120 14:04:34.869436 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.869447 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:34.869455 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:34.869528 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:34.910676 1971155 cri.go:89] found id: ""
	I0120 14:04:34.910708 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.910721 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:34.910735 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:34.910751 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:34.961049 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:34.961094 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:34.976224 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:34.976260 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:35.049407 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:35.049434 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:35.049452 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:35.133338 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:35.133396 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:34.279211 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:36.778511 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:39.032716 1969949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.853801532s)
	I0120 14:04:39.032805 1969949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:04:39.056153 1969949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:04:39.077937 1969949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:04:39.097957 1969949 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:04:39.097986 1969949 kubeadm.go:157] found existing configuration files:
	
	I0120 14:04:39.098074 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:04:39.127178 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:04:39.127249 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:04:39.140640 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:04:39.152447 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:04:39.152516 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:04:39.174543 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:04:39.185436 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:04:39.185521 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:04:39.196720 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:04:39.207028 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:04:39.207105 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:04:39.217474 1969949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:04:39.273124 1969949 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:04:39.273208 1969949 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:04:39.402646 1969949 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:04:39.402821 1969949 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:04:39.402964 1969949 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:04:39.411696 1969949 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:04:39.413689 1969949 out.go:235]   - Generating certificates and keys ...
	I0120 14:04:39.413807 1969949 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:04:39.413895 1969949 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:04:39.414021 1969949 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:04:39.414131 1969949 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:04:39.414240 1969949 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:04:39.414333 1969949 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:04:39.414455 1969949 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:04:39.414538 1969949 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:04:39.414693 1969949 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:04:39.414814 1969949 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:04:39.414881 1969949 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:04:39.414976 1969949 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:04:39.516867 1969949 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:04:39.700148 1969949 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:04:39.838568 1969949 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:04:40.020807 1969949 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:04:40.083569 1969949 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:04:40.083953 1969949 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:04:40.086599 1969949 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:04:40.088383 1969949 out.go:235]   - Booting up control plane ...
	I0120 14:04:40.088515 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:04:40.090041 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:04:40.092450 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:04:40.114859 1969949 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:04:40.124692 1969949 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:04:40.124773 1969949 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:04:36.025541 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:38.027612 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:40.528385 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:37.676133 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:37.690435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:37.690520 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:37.732788 1971155 cri.go:89] found id: ""
	I0120 14:04:37.732824 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.732837 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:37.732846 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:37.732914 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:37.770338 1971155 cri.go:89] found id: ""
	I0120 14:04:37.770375 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.770387 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:37.770395 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:37.770461 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:37.813580 1971155 cri.go:89] found id: ""
	I0120 14:04:37.813612 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.813639 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:37.813645 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:37.813702 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:37.854706 1971155 cri.go:89] found id: ""
	I0120 14:04:37.854740 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.854751 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:37.854759 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:37.854841 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:37.891577 1971155 cri.go:89] found id: ""
	I0120 14:04:37.891607 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.891616 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:37.891623 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:37.891681 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:37.928718 1971155 cri.go:89] found id: ""
	I0120 14:04:37.928750 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.928762 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:37.928772 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:37.928844 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:37.964166 1971155 cri.go:89] found id: ""
	I0120 14:04:37.964203 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.964211 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:37.964218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:37.964279 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:38.005257 1971155 cri.go:89] found id: ""
	I0120 14:04:38.005299 1971155 logs.go:282] 0 containers: []
	W0120 14:04:38.005311 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:38.005325 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:38.005340 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:38.058706 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:38.058756 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:38.073507 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:38.073584 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:38.149050 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:38.149073 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:38.149091 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:38.227105 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:38.227163 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:40.772041 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:40.787399 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:40.787471 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:40.828186 1971155 cri.go:89] found id: ""
	I0120 14:04:40.828226 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.828247 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:40.828257 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:40.828327 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:40.869532 1971155 cri.go:89] found id: ""
	I0120 14:04:40.869561 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.869573 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:40.869581 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:40.869670 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:40.916288 1971155 cri.go:89] found id: ""
	I0120 14:04:40.916324 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.916343 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:40.916357 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:40.916425 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:40.953018 1971155 cri.go:89] found id: ""
	I0120 14:04:40.953053 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.953066 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:40.953076 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:40.953150 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:40.993977 1971155 cri.go:89] found id: ""
	I0120 14:04:40.994012 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.994024 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:40.994033 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:40.994104 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:41.037652 1971155 cri.go:89] found id: ""
	I0120 14:04:41.037678 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.037685 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:41.037692 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:41.037756 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:41.085826 1971155 cri.go:89] found id: ""
	I0120 14:04:41.085855 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.085864 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:41.085873 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:41.085950 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:41.128902 1971155 cri.go:89] found id: ""
	I0120 14:04:41.128939 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.128951 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:41.128965 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:41.128984 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:41.182933 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:41.182976 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:41.198454 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:41.198493 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:41.278062 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:41.278090 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:41.278106 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:41.359935 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:41.359983 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:39.279853 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:41.778833 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:43.779056 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:40.281534 1969949 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:04:40.281697 1969949 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:04:41.283107 1969949 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001641988s
	I0120 14:04:41.283223 1969949 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:04:43.026341 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:45.027225 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:46.784985 1969949 kubeadm.go:310] [api-check] The API server is healthy after 5.501686403s
	I0120 14:04:46.800497 1969949 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:04:46.826466 1969949 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:04:46.872907 1969949 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:04:46.873201 1969949 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-648067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:04:46.893113 1969949 kubeadm.go:310] [bootstrap-token] Using token: hll471.vkmzt8kk1d060cyb
	I0120 14:04:43.908548 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:43.927397 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:43.927492 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:43.975131 1971155 cri.go:89] found id: ""
	I0120 14:04:43.975160 1971155 logs.go:282] 0 containers: []
	W0120 14:04:43.975169 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:43.975175 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:43.975243 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:44.020970 1971155 cri.go:89] found id: ""
	I0120 14:04:44.021006 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.021018 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:44.021027 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:44.021135 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:44.067873 1971155 cri.go:89] found id: ""
	I0120 14:04:44.067914 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.067927 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:44.067936 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:44.068010 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:44.108047 1971155 cri.go:89] found id: ""
	I0120 14:04:44.108082 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.108093 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:44.108099 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:44.108161 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:44.149416 1971155 cri.go:89] found id: ""
	I0120 14:04:44.149449 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.149458 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:44.149466 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:44.149521 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:44.189664 1971155 cri.go:89] found id: ""
	I0120 14:04:44.189701 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.189712 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:44.189720 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:44.189787 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:44.233518 1971155 cri.go:89] found id: ""
	I0120 14:04:44.233548 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.233558 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:44.233565 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:44.233635 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:44.279568 1971155 cri.go:89] found id: ""
	I0120 14:04:44.279603 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.279614 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:44.279626 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:44.279641 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:44.348693 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:44.348742 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:44.363510 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:44.363546 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:44.437107 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:44.437132 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:44.437146 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:44.516463 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:44.516512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:47.065723 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:47.081983 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:47.082120 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:47.122906 1971155 cri.go:89] found id: ""
	I0120 14:04:47.122945 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.122958 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:47.122969 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:47.123060 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:47.166879 1971155 cri.go:89] found id: ""
	I0120 14:04:47.166916 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.166928 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:47.166937 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:47.167012 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:47.213675 1971155 cri.go:89] found id: ""
	I0120 14:04:47.213706 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.213715 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:47.213722 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:47.213778 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:47.254655 1971155 cri.go:89] found id: ""
	I0120 14:04:47.254692 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.254702 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:47.254711 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:47.254787 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:47.297680 1971155 cri.go:89] found id: ""
	I0120 14:04:47.297718 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.297731 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:47.297741 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:47.297829 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:47.337150 1971155 cri.go:89] found id: ""
	I0120 14:04:47.337179 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.337188 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:47.337194 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:47.337258 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:47.376190 1971155 cri.go:89] found id: ""
	I0120 14:04:47.376223 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.376234 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:47.376242 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:47.376343 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:47.424425 1971155 cri.go:89] found id: ""
	I0120 14:04:47.424465 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.424477 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:47.424491 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:47.424508 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:46.894672 1969949 out.go:235]   - Configuring RBAC rules ...
	I0120 14:04:46.894865 1969949 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:04:46.901221 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:04:46.911875 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:04:46.916856 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:04:46.922245 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:04:46.929769 1969949 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:04:47.194825 1969949 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:04:47.629977 1969949 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:04:48.194241 1969949 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:04:48.195072 1969949 kubeadm.go:310] 
	I0120 14:04:48.195176 1969949 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:04:48.195193 1969949 kubeadm.go:310] 
	I0120 14:04:48.195309 1969949 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:04:48.195319 1969949 kubeadm.go:310] 
	I0120 14:04:48.195353 1969949 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:04:48.195444 1969949 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:04:48.195583 1969949 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:04:48.195610 1969949 kubeadm.go:310] 
	I0120 14:04:48.195693 1969949 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:04:48.195705 1969949 kubeadm.go:310] 
	I0120 14:04:48.195767 1969949 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:04:48.195776 1969949 kubeadm.go:310] 
	I0120 14:04:48.195891 1969949 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:04:48.196003 1969949 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:04:48.196119 1969949 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:04:48.196143 1969949 kubeadm.go:310] 
	I0120 14:04:48.196264 1969949 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:04:48.196353 1969949 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:04:48.196374 1969949 kubeadm.go:310] 
	I0120 14:04:48.196486 1969949 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hll471.vkmzt8kk1d060cyb \
	I0120 14:04:48.196623 1969949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:04:48.196658 1969949 kubeadm.go:310] 	--control-plane 
	I0120 14:04:48.196668 1969949 kubeadm.go:310] 
	I0120 14:04:48.196788 1969949 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:04:48.196797 1969949 kubeadm.go:310] 
	I0120 14:04:48.196887 1969949 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hll471.vkmzt8kk1d060cyb \
	I0120 14:04:48.196999 1969949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:04:48.198034 1969949 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:04:48.198074 1969949 cni.go:84] Creating CNI manager for ""
	I0120 14:04:48.198087 1969949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:04:48.199935 1969949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:04:46.278851 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:48.279224 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:48.201356 1969949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:04:48.213317 1969949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:04:48.232194 1969949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:04:48.232407 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-648067 minikube.k8s.io/updated_at=2025_01_20T14_04_48_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=no-preload-648067 minikube.k8s.io/primary=true
	I0120 14:04:48.232407 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:48.270777 1969949 ops.go:34] apiserver oom_adj: -16
	I0120 14:04:48.458517 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:48.959588 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:49.459308 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:49.958914 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:47.529098 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:50.025867 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:47.439773 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:47.439807 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:47.515012 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:47.515040 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:47.515077 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:47.602215 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:47.602253 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:47.647880 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:47.647910 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:50.211849 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:50.225773 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:50.225855 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:50.268626 1971155 cri.go:89] found id: ""
	I0120 14:04:50.268663 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.268676 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:50.268686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:50.268759 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:50.307523 1971155 cri.go:89] found id: ""
	I0120 14:04:50.307562 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.307575 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:50.307584 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:50.307656 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:50.347783 1971155 cri.go:89] found id: ""
	I0120 14:04:50.347820 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.347832 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:50.347840 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:50.347910 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:50.394427 1971155 cri.go:89] found id: ""
	I0120 14:04:50.394462 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.394474 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:50.394482 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:50.394564 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:50.434136 1971155 cri.go:89] found id: ""
	I0120 14:04:50.434168 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.434178 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:50.434187 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:50.434253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:50.472220 1971155 cri.go:89] found id: ""
	I0120 14:04:50.472256 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.472268 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:50.472277 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:50.472341 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:50.513511 1971155 cri.go:89] found id: ""
	I0120 14:04:50.513541 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.513552 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:50.513560 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:50.513630 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:50.551073 1971155 cri.go:89] found id: ""
	I0120 14:04:50.551110 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.551121 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:50.551143 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:50.551163 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:50.565714 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:50.565744 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:50.651186 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:50.651214 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:50.651238 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:50.735185 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:50.735234 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:50.780258 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:50.780287 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:50.459078 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:50.958680 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:51.459194 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:51.958693 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:52.459624 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:52.569627 1969949 kubeadm.go:1113] duration metric: took 4.337296975s to wait for elevateKubeSystemPrivileges
	I0120 14:04:52.569667 1969949 kubeadm.go:394] duration metric: took 5m3.880867579s to StartCluster
	I0120 14:04:52.569696 1969949 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:04:52.569799 1969949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:04:52.571249 1969949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:04:52.571569 1969949 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:04:52.571705 1969949 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:04:52.571794 1969949 addons.go:69] Setting storage-provisioner=true in profile "no-preload-648067"
	I0120 14:04:52.571819 1969949 addons.go:238] Setting addon storage-provisioner=true in "no-preload-648067"
	I0120 14:04:52.571819 1969949 addons.go:69] Setting default-storageclass=true in profile "no-preload-648067"
	W0120 14:04:52.571832 1969949 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:04:52.571833 1969949 addons.go:69] Setting metrics-server=true in profile "no-preload-648067"
	I0120 14:04:52.571850 1969949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-648067"
	I0120 14:04:52.571858 1969949 addons.go:238] Setting addon metrics-server=true in "no-preload-648067"
	W0120 14:04:52.571867 1969949 addons.go:247] addon metrics-server should already be in state true
	I0120 14:04:52.571861 1969949 addons.go:69] Setting dashboard=true in profile "no-preload-648067"
	I0120 14:04:52.571895 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571904 1969949 addons.go:238] Setting addon dashboard=true in "no-preload-648067"
	W0120 14:04:52.571919 1969949 addons.go:247] addon dashboard should already be in state true
	I0120 14:04:52.571873 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571957 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571816 1969949 config.go:182] Loaded profile config "no-preload-648067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:04:52.572249 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572310 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572388 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572402 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572429 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572437 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572388 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572514 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.573278 1969949 out.go:177] * Verifying Kubernetes components...
	I0120 14:04:52.574697 1969949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:04:52.593445 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35109
	I0120 14:04:52.593972 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I0120 14:04:52.594196 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0120 14:04:52.594251 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594311 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0120 14:04:52.594456 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594699 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594819 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.595051 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595058 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595072 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595075 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595878 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.595883 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.595967 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595978 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595992 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595994 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.596089 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.596460 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.596493 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.596495 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.596537 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.597392 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.597458 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.597937 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.597987 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.601273 1969949 addons.go:238] Setting addon default-storageclass=true in "no-preload-648067"
	W0120 14:04:52.601293 1969949 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:04:52.601328 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.601665 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.601709 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.615800 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0120 14:04:52.616400 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.617008 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.617030 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.617408 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.617522 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0120 14:04:52.617864 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.618536 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.619193 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.619209 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.619284 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0120 14:04:52.619647 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.619726 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.619909 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.620278 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.620296 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.620825 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.620943 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0120 14:04:52.621206 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.622123 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.622176 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.622220 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.623015 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.623665 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.623691 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.624470 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.625095 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.625143 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.625528 1969949 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:04:52.625540 1969949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:04:52.625550 1969949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:04:52.627935 1969949 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:04:50.279663 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.280483 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.627964 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:04:52.627983 1969949 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:04:52.628010 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.628135 1969949 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:04:52.628150 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:04:52.628172 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.629358 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:04:52.629377 1969949 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:04:52.629400 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.632446 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633059 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633123 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.633132 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.633166 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633329 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.633372 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.633419 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633507 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.633561 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.633761 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.634098 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.634129 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.634291 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.634635 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.634792 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.634816 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.635030 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.635288 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.635523 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.635673 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.649363 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I0120 14:04:52.649962 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.650624 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.650650 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.651046 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.651360 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.653362 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.653620 1969949 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:04:52.653637 1969949 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:04:52.653657 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.656950 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.657430 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.657459 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.657671 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.658472 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.658685 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.658860 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.827213 1969949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:04:52.892209 1969949 node_ready.go:35] waiting up to 6m0s for node "no-preload-648067" to be "Ready" ...
	I0120 14:04:52.927742 1969949 node_ready.go:49] node "no-preload-648067" has status "Ready":"True"
	I0120 14:04:52.927778 1969949 node_ready.go:38] duration metric: took 35.520382ms for node "no-preload-648067" to be "Ready" ...
	I0120 14:04:52.927792 1969949 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:52.945134 1969949 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace to be "Ready" ...
	I0120 14:04:52.998630 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:04:53.015208 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:04:53.015251 1969949 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:04:53.050964 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:04:53.053498 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:04:53.053531 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:04:53.131884 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:04:53.131915 1969949 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:04:53.156697 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:04:53.156734 1969949 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:04:53.267300 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:04:53.267329 1969949 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:04:53.267739 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:04:53.267765 1969949 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:04:53.452299 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:04:53.456705 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.456735 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.457124 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.457209 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.457135 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:53.457264 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.457356 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.457651 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.457667 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.461528 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:04:53.461555 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:04:53.471471 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.471505 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.471848 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.471864 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.515363 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:04:53.515398 1969949 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:04:53.636963 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:04:53.637001 1969949 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:04:53.840979 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:04:53.841011 1969949 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:04:53.959045 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:04:53.959082 1969949 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:04:54.051582 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:04:54.051618 1969949 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:04:54.170664 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:04:54.682801 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.631779213s)
	I0120 14:04:54.682872 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:54.682887 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:54.683248 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:54.683271 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:54.683286 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:54.683296 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:54.683571 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:54.683595 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:54.683577 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:54.982997 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:55.132956 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.680599793s)
	I0120 14:04:55.133021 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.133038 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.133526 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.133549 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.133560 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.133568 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.133526 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.133807 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.133831 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.133847 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.133867 1969949 addons.go:479] Verifying addon metrics-server=true in "no-preload-648067"
	I0120 14:04:52.026070 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:54.026722 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:55.971683 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.800920116s)
	I0120 14:04:55.971747 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.971763 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.972123 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.972144 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.972155 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.972163 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.972460 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.973844 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.973867 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.975729 1969949 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-648067 addons enable metrics-server
	
	I0120 14:04:55.977469 1969949 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:04:53.331081 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:53.346851 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:53.346935 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:53.390862 1971155 cri.go:89] found id: ""
	I0120 14:04:53.390901 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.390915 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:53.390924 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:53.391007 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:53.433455 1971155 cri.go:89] found id: ""
	I0120 14:04:53.433482 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.433491 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:53.433497 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:53.433555 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:53.477771 1971155 cri.go:89] found id: ""
	I0120 14:04:53.477805 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.477817 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:53.477826 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:53.477898 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:53.518330 1971155 cri.go:89] found id: ""
	I0120 14:04:53.518365 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.518375 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:53.518384 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:53.518461 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:53.557755 1971155 cri.go:89] found id: ""
	I0120 14:04:53.557804 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.557817 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:53.557827 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:53.557907 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:53.600681 1971155 cri.go:89] found id: ""
	I0120 14:04:53.600718 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.600730 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:53.600739 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:53.600836 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:53.644255 1971155 cri.go:89] found id: ""
	I0120 14:04:53.644291 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.644302 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:53.644311 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:53.644398 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:53.681445 1971155 cri.go:89] found id: ""
	I0120 14:04:53.681485 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.681498 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:53.681513 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:53.681529 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:53.737076 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:53.737131 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:53.755500 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:53.755551 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:53.846378 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:53.846416 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:53.846435 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:53.956291 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:53.956337 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:56.505456 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:56.521259 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:56.521352 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:56.572379 1971155 cri.go:89] found id: ""
	I0120 14:04:56.572415 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.572427 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:56.572435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:56.572503 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:56.613123 1971155 cri.go:89] found id: ""
	I0120 14:04:56.613151 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.613162 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:56.613170 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:56.613237 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:56.650863 1971155 cri.go:89] found id: ""
	I0120 14:04:56.650896 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.650904 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:56.650911 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:56.650967 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:56.686709 1971155 cri.go:89] found id: ""
	I0120 14:04:56.686741 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.686749 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:56.686756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:56.686813 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:56.722765 1971155 cri.go:89] found id: ""
	I0120 14:04:56.722794 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.722802 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:56.722809 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:56.722867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:56.762188 1971155 cri.go:89] found id: ""
	I0120 14:04:56.762223 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.762235 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:56.762244 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:56.762321 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:56.807714 1971155 cri.go:89] found id: ""
	I0120 14:04:56.807741 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.807750 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:56.807756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:56.807818 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:56.846817 1971155 cri.go:89] found id: ""
	I0120 14:04:56.846851 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.846860 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:56.846870 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:56.846884 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:56.919562 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:56.919593 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:56.919613 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:57.007957 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:57.008011 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:57.051295 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:57.051339 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:57.104114 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:57.104172 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:54.779036 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:56.272135 1970602 pod_ready.go:82] duration metric: took 4m0.000512351s for pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace to be "Ready" ...
	E0120 14:04:56.272179 1970602 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:04:56.272203 1970602 pod_ready.go:39] duration metric: took 4m14.631982517s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:56.272284 1970602 kubeadm.go:597] duration metric: took 4m21.961665482s to restartPrimaryControlPlane
	W0120 14:04:56.272373 1970602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:04:56.272404 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:04:55.979014 1969949 addons.go:514] duration metric: took 3.407316682s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:04:57.451990 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.452924 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:56.527827 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.026535 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.620229 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:59.637010 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:59.637114 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:59.680984 1971155 cri.go:89] found id: ""
	I0120 14:04:59.681020 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.681032 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:59.681041 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:59.681128 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:59.725445 1971155 cri.go:89] found id: ""
	I0120 14:04:59.725480 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.725492 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:59.725501 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:59.725573 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:59.767962 1971155 cri.go:89] found id: ""
	I0120 14:04:59.767999 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.768012 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:59.768020 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:59.768091 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:59.812201 1971155 cri.go:89] found id: ""
	I0120 14:04:59.812240 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.812252 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:59.812267 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:59.812335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:59.853005 1971155 cri.go:89] found id: ""
	I0120 14:04:59.853034 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.853043 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:59.853049 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:59.853131 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:59.890747 1971155 cri.go:89] found id: ""
	I0120 14:04:59.890859 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.890878 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:59.890889 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:59.890969 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:59.934050 1971155 cri.go:89] found id: ""
	I0120 14:04:59.934090 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.934102 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:59.934110 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:59.934179 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:59.977069 1971155 cri.go:89] found id: ""
	I0120 14:04:59.977106 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.977119 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:59.977131 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:59.977150 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:00.070208 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:00.070261 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:00.116521 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:00.116557 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:00.175645 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:00.175695 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:00.192183 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:00.192228 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:00.273233 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:01.452480 1969949 pod_ready.go:93] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.452519 1969949 pod_ready.go:82] duration metric: took 8.507352286s for pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.452534 1969949 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.458456 1969949 pod_ready.go:93] pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.458488 1969949 pod_ready.go:82] duration metric: took 5.941966ms for pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.458503 1969949 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.465708 1969949 pod_ready.go:93] pod "etcd-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.465733 1969949 pod_ready.go:82] duration metric: took 7.221959ms for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.465745 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.473764 1969949 pod_ready.go:93] pod "kube-apiserver-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.473796 1969949 pod_ready.go:82] duration metric: took 8.041648ms for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.473815 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.480463 1969949 pod_ready.go:93] pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.480494 1969949 pod_ready.go:82] duration metric: took 6.670074ms for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.480508 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kr6tq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.849787 1969949 pod_ready.go:93] pod "kube-proxy-kr6tq" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.849820 1969949 pod_ready.go:82] duration metric: took 369.302403ms for pod "kube-proxy-kr6tq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.849834 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:02.250242 1969949 pod_ready.go:93] pod "kube-scheduler-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:02.250279 1969949 pod_ready.go:82] duration metric: took 400.436958ms for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:02.250289 1969949 pod_ready.go:39] duration metric: took 9.322472589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:02.250305 1969949 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:02.250373 1969949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:02.307690 1969949 api_server.go:72] duration metric: took 9.736077102s to wait for apiserver process to appear ...
	I0120 14:05:02.307725 1969949 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:02.307751 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 14:05:02.312837 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0120 14:05:02.314012 1969949 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:02.314038 1969949 api_server.go:131] duration metric: took 6.305469ms to wait for apiserver health ...
	I0120 14:05:02.314047 1969949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:02.454048 1969949 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:02.454092 1969949 system_pods.go:61] "coredns-668d6bf9bc-2fbd7" [d2cf52fe-b375-47cd-a4bf-d7ef8e07a4b7] Running
	I0120 14:05:02.454099 1969949 system_pods.go:61] "coredns-668d6bf9bc-86xhz" [4af72226-8186-40e7-a923-01381cc52731] Running
	I0120 14:05:02.454104 1969949 system_pods.go:61] "etcd-no-preload-648067" [87debb8b-80bc-41cc-91f3-7b905ab8177c] Running
	I0120 14:05:02.454109 1969949 system_pods.go:61] "kube-apiserver-no-preload-648067" [6b1f5f1b-67ae-4ab2-a186-1c5224fcbc4e] Running
	I0120 14:05:02.454114 1969949 system_pods.go:61] "kube-controller-manager-no-preload-648067" [1bf90869-71a8-4459-a1b8-b59f78af8a8b] Running
	I0120 14:05:02.454119 1969949 system_pods.go:61] "kube-proxy-kr6tq" [462ab3d1-c225-4319-bac8-926a1e43a14d] Running
	I0120 14:05:02.454125 1969949 system_pods.go:61] "kube-scheduler-no-preload-648067" [38edfe65-9c58-4a24-b108-c22846010b97] Running
	I0120 14:05:02.454136 1969949 system_pods.go:61] "metrics-server-f79f97bbb-9kb5f" [fb8dd9df-cd37-4779-af22-4abd91dbc421] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:02.454144 1969949 system_pods.go:61] "storage-provisioner" [12bde765-1258-4689-b448-64208dd30638] Running
	I0120 14:05:02.454158 1969949 system_pods.go:74] duration metric: took 140.103109ms to wait for pod list to return data ...
	I0120 14:05:02.454172 1969949 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:02.650007 1969949 default_sa.go:45] found service account: "default"
	I0120 14:05:02.650050 1969949 default_sa.go:55] duration metric: took 195.869128ms for default service account to be created ...
	I0120 14:05:02.650064 1969949 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:02.853144 1969949 system_pods.go:87] 9 kube-system pods found
	I0120 14:05:01.028886 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:03.526512 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:05.527941 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:02.773877 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:02.788560 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:02.788661 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:02.838025 1971155 cri.go:89] found id: ""
	I0120 14:05:02.838061 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.838073 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:02.838082 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:02.838152 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:02.879106 1971155 cri.go:89] found id: ""
	I0120 14:05:02.879139 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.879150 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:02.879158 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:02.879226 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:02.919842 1971155 cri.go:89] found id: ""
	I0120 14:05:02.919883 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.919896 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:02.919905 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:02.919978 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:02.959612 1971155 cri.go:89] found id: ""
	I0120 14:05:02.959644 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.959656 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:02.959664 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:02.959737 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:03.018360 1971155 cri.go:89] found id: ""
	I0120 14:05:03.018392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.018401 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:03.018408 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:03.018491 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:03.064749 1971155 cri.go:89] found id: ""
	I0120 14:05:03.064779 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.064788 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:03.064801 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:03.064874 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:03.114566 1971155 cri.go:89] found id: ""
	I0120 14:05:03.114595 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.114617 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:03.114626 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:03.114695 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:03.163672 1971155 cri.go:89] found id: ""
	I0120 14:05:03.163707 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.163720 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:03.163733 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:03.163750 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:03.243662 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:03.243718 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:03.261586 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:03.261629 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:03.358343 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:03.358377 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:03.358393 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:03.452803 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:03.452852 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:06.004224 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:06.019368 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:06.019459 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:06.068617 1971155 cri.go:89] found id: ""
	I0120 14:05:06.068655 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.068668 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:06.068678 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:06.068747 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:06.112806 1971155 cri.go:89] found id: ""
	I0120 14:05:06.112859 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.112874 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:06.112883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:06.112960 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:06.150653 1971155 cri.go:89] found id: ""
	I0120 14:05:06.150695 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.150708 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:06.150716 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:06.150788 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:06.190915 1971155 cri.go:89] found id: ""
	I0120 14:05:06.190958 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.190973 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:06.190992 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:06.191077 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:06.237577 1971155 cri.go:89] found id: ""
	I0120 14:05:06.237616 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.237627 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:06.237636 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:06.237712 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:06.280826 1971155 cri.go:89] found id: ""
	I0120 14:05:06.280861 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.280873 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:06.280883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:06.280958 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:06.317836 1971155 cri.go:89] found id: ""
	I0120 14:05:06.317872 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.317883 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:06.317892 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:06.317962 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:06.365531 1971155 cri.go:89] found id: ""
	I0120 14:05:06.365574 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.365587 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:06.365601 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:06.365626 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:06.460369 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:06.460403 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:06.460422 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:06.541919 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:06.541967 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:06.588755 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:06.588805 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:06.648087 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:06.648140 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:08.026139 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:10.026227 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:09.166758 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:09.184071 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:09.184193 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:09.222998 1971155 cri.go:89] found id: ""
	I0120 14:05:09.223035 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.223048 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:09.223056 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:09.223140 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:09.275875 1971155 cri.go:89] found id: ""
	I0120 14:05:09.275912 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.275926 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:09.275934 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:09.276006 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:09.320157 1971155 cri.go:89] found id: ""
	I0120 14:05:09.320192 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.320210 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:09.320218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:09.320309 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:09.366463 1971155 cri.go:89] found id: ""
	I0120 14:05:09.366496 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.366505 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:09.366511 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:09.366582 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:09.414645 1971155 cri.go:89] found id: ""
	I0120 14:05:09.414675 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.414683 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:09.414689 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:09.414758 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:09.474004 1971155 cri.go:89] found id: ""
	I0120 14:05:09.474047 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.474059 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:09.474068 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:09.474153 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:09.536187 1971155 cri.go:89] found id: ""
	I0120 14:05:09.536217 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.536224 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:09.536230 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:09.536289 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:09.574100 1971155 cri.go:89] found id: ""
	I0120 14:05:09.574134 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.574142 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:09.574154 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:09.574167 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:09.620881 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:09.620923 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:09.676117 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:09.676177 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:09.692431 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:09.692473 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:09.768800 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:09.768831 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:09.768851 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:12.350771 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:12.365286 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:12.365374 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:12.402924 1971155 cri.go:89] found id: ""
	I0120 14:05:12.402966 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.402978 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:12.402998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:12.403073 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:12.027431 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:14.526570 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:12.442108 1971155 cri.go:89] found id: ""
	I0120 14:05:12.442138 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.442147 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:12.442154 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:12.442211 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:12.484002 1971155 cri.go:89] found id: ""
	I0120 14:05:12.484058 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.484071 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:12.484078 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:12.484149 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:12.524060 1971155 cri.go:89] found id: ""
	I0120 14:05:12.524097 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.524109 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:12.524118 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:12.524201 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:12.563120 1971155 cri.go:89] found id: ""
	I0120 14:05:12.563147 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.563156 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:12.563163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:12.563232 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:12.604782 1971155 cri.go:89] found id: ""
	I0120 14:05:12.604824 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.604838 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:12.604847 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:12.604925 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:12.642278 1971155 cri.go:89] found id: ""
	I0120 14:05:12.642305 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.642316 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:12.642326 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:12.642391 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:12.682274 1971155 cri.go:89] found id: ""
	I0120 14:05:12.682311 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.682323 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:12.682337 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:12.682353 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:12.773735 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:12.773785 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:12.825008 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:12.825049 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:12.873999 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:12.874042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:12.888767 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:12.888804 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:12.965739 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:15.466957 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:15.493756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:15.493839 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:15.538680 1971155 cri.go:89] found id: ""
	I0120 14:05:15.538709 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.538717 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:15.538724 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:15.538783 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:15.583029 1971155 cri.go:89] found id: ""
	I0120 14:05:15.583069 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.583081 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:15.583089 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:15.583174 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:15.623762 1971155 cri.go:89] found id: ""
	I0120 14:05:15.623801 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.623815 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:15.623825 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:15.623903 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:15.663883 1971155 cri.go:89] found id: ""
	I0120 14:05:15.663921 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.663930 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:15.663938 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:15.664013 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:15.701723 1971155 cri.go:89] found id: ""
	I0120 14:05:15.701758 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.701769 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:15.701778 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:15.701847 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:15.741612 1971155 cri.go:89] found id: ""
	I0120 14:05:15.741649 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.741661 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:15.741670 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:15.741736 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:15.783225 1971155 cri.go:89] found id: ""
	I0120 14:05:15.783257 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.783267 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:15.783275 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:15.783353 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:15.823664 1971155 cri.go:89] found id: ""
	I0120 14:05:15.823699 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.823713 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:15.823725 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:15.823740 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:15.876890 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:15.876936 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:15.892034 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:15.892077 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:15.967939 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:15.967966 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:15.967982 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:16.049913 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:16.049961 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:16.527187 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:19.028271 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:18.599849 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:18.613686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:18.613756 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:18.656070 1971155 cri.go:89] found id: ""
	I0120 14:05:18.656104 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.656113 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:18.656120 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:18.656184 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:18.694391 1971155 cri.go:89] found id: ""
	I0120 14:05:18.694420 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.694429 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:18.694435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:18.694499 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:18.733057 1971155 cri.go:89] found id: ""
	I0120 14:05:18.733094 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.733107 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:18.733114 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:18.733187 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:18.770955 1971155 cri.go:89] found id: ""
	I0120 14:05:18.770985 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.770993 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:18.770998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:18.771065 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:18.805878 1971155 cri.go:89] found id: ""
	I0120 14:05:18.805912 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.805924 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:18.805932 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:18.806015 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:18.843859 1971155 cri.go:89] found id: ""
	I0120 14:05:18.843891 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.843904 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:18.843912 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:18.843981 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:18.882554 1971155 cri.go:89] found id: ""
	I0120 14:05:18.882585 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.882594 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:18.882622 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:18.882686 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:18.919206 1971155 cri.go:89] found id: ""
	I0120 14:05:18.919242 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.919258 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:18.919269 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:18.919284 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:18.969428 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:18.969476 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:18.984666 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:18.984702 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:19.060472 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:19.060502 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:19.060517 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:19.136205 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:19.136248 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:21.681437 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:21.695755 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:21.695840 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:21.732554 1971155 cri.go:89] found id: ""
	I0120 14:05:21.732587 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.732599 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:21.732609 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:21.732680 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:21.771047 1971155 cri.go:89] found id: ""
	I0120 14:05:21.771078 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.771087 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:21.771093 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:21.771149 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:21.806053 1971155 cri.go:89] found id: ""
	I0120 14:05:21.806084 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.806096 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:21.806104 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:21.806176 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:21.843647 1971155 cri.go:89] found id: ""
	I0120 14:05:21.843679 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.843692 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:21.843699 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:21.843767 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:21.878399 1971155 cri.go:89] found id: ""
	I0120 14:05:21.878437 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.878449 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:21.878458 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:21.878531 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:21.912712 1971155 cri.go:89] found id: ""
	I0120 14:05:21.912746 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.912757 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:21.912770 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:21.912842 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:21.948182 1971155 cri.go:89] found id: ""
	I0120 14:05:21.948214 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.948225 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:21.948241 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:21.948311 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:21.987907 1971155 cri.go:89] found id: ""
	I0120 14:05:21.987945 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.987954 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:21.987964 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:21.987977 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:22.037198 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:22.037244 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:22.053238 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:22.053293 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:22.125680 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:22.125703 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:22.125721 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:22.208323 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:22.208371 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:21.529531 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:24.025073 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:24.752796 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:24.769865 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:24.769967 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:24.809247 1971155 cri.go:89] found id: ""
	I0120 14:05:24.809282 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.809293 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:24.809305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:24.809378 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:24.849761 1971155 cri.go:89] found id: ""
	I0120 14:05:24.849788 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.849797 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:24.849803 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:24.849867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:24.892195 1971155 cri.go:89] found id: ""
	I0120 14:05:24.892226 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.892239 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:24.892249 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:24.892315 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:24.935367 1971155 cri.go:89] found id: ""
	I0120 14:05:24.935400 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.935412 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:24.935420 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:24.935488 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:24.980132 1971155 cri.go:89] found id: ""
	I0120 14:05:24.980164 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.980179 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:24.980188 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:24.980269 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:25.017365 1971155 cri.go:89] found id: ""
	I0120 14:05:25.017394 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.017405 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:25.017413 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:25.017487 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:25.059078 1971155 cri.go:89] found id: ""
	I0120 14:05:25.059115 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.059127 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:25.059163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:25.059276 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:25.099507 1971155 cri.go:89] found id: ""
	I0120 14:05:25.099545 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.099557 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:25.099571 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:25.099588 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:25.174356 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:25.174385 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:25.174412 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:25.260260 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:25.260303 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:25.304309 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:25.304342 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:25.358340 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:25.358388 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:24.178761 1970602 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.906332562s)
	I0120 14:05:24.178859 1970602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:05:24.194902 1970602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:05:24.206080 1970602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:05:24.217371 1970602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:05:24.217398 1970602 kubeadm.go:157] found existing configuration files:
	
	I0120 14:05:24.217448 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:05:24.227549 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:05:24.227627 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:05:24.238584 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:05:24.249016 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:05:24.249171 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:05:24.260537 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:05:24.270728 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:05:24.270792 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:05:24.281345 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:05:24.291266 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:05:24.291344 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:05:24.302258 1970602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:05:24.477322 1970602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:05:26.026356 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:28.027425 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:30.525634 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:27.876603 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:27.892994 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:27.893071 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:27.931991 1971155 cri.go:89] found id: ""
	I0120 14:05:27.932048 1971155 logs.go:282] 0 containers: []
	W0120 14:05:27.932060 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:27.932068 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:27.932139 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:27.968882 1971155 cri.go:89] found id: ""
	I0120 14:05:27.968917 1971155 logs.go:282] 0 containers: []
	W0120 14:05:27.968926 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:27.968933 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:27.968998 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:28.009604 1971155 cri.go:89] found id: ""
	I0120 14:05:28.009635 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.009644 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:28.009650 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:28.009708 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:28.050036 1971155 cri.go:89] found id: ""
	I0120 14:05:28.050069 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.050080 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:28.050087 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:28.050156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:28.092348 1971155 cri.go:89] found id: ""
	I0120 14:05:28.092392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.092427 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:28.092436 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:28.092512 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:28.133751 1971155 cri.go:89] found id: ""
	I0120 14:05:28.133787 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.133796 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:28.133804 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:28.133875 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:28.177231 1971155 cri.go:89] found id: ""
	I0120 14:05:28.177268 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.177280 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:28.177288 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:28.177382 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:28.217125 1971155 cri.go:89] found id: ""
	I0120 14:05:28.217160 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.217175 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:28.217189 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:28.217207 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:28.305446 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:28.305480 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:28.305498 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:28.389940 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:28.389996 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:28.445472 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:28.445519 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:28.503281 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:28.503343 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:31.023457 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:31.039576 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:31.039665 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:31.090049 1971155 cri.go:89] found id: ""
	I0120 14:05:31.090086 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.090099 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:31.090108 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:31.090199 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:31.129134 1971155 cri.go:89] found id: ""
	I0120 14:05:31.129168 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.129180 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:31.129189 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:31.129246 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:31.169790 1971155 cri.go:89] found id: ""
	I0120 14:05:31.169822 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.169834 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:31.169845 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:31.169940 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:31.210981 1971155 cri.go:89] found id: ""
	I0120 14:05:31.211017 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.211030 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:31.211039 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:31.211126 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:31.254051 1971155 cri.go:89] found id: ""
	I0120 14:05:31.254081 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.254089 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:31.254096 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:31.254175 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:31.301717 1971155 cri.go:89] found id: ""
	I0120 14:05:31.301750 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.301772 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:31.301782 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:31.301847 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:31.343204 1971155 cri.go:89] found id: ""
	I0120 14:05:31.343233 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.343242 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:31.343248 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:31.343304 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:31.382466 1971155 cri.go:89] found id: ""
	I0120 14:05:31.382501 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.382512 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:31.382525 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:31.382544 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:31.461732 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:31.461781 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:31.461801 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:31.559483 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:31.559566 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:31.606795 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:31.606833 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:31.661423 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:31.661468 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:33.376770 1970602 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:05:33.376853 1970602 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:05:33.376989 1970602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:05:33.377149 1970602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:05:33.377293 1970602 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:05:33.377400 1970602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:05:33.378924 1970602 out.go:235]   - Generating certificates and keys ...
	I0120 14:05:33.379025 1970602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:05:33.379104 1970602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:05:33.379208 1970602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:05:33.379307 1970602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:05:33.379417 1970602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:05:33.379524 1970602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:05:33.379607 1970602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:05:33.379717 1970602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:05:33.379839 1970602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:05:33.379966 1970602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:05:33.380043 1970602 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:05:33.380129 1970602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:05:33.380198 1970602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:05:33.380268 1970602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:05:33.380343 1970602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:05:33.380413 1970602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:05:33.380471 1970602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:05:33.380560 1970602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:05:33.380637 1970602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:05:33.382317 1970602 out.go:235]   - Booting up control plane ...
	I0120 14:05:33.382425 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:05:33.382512 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:05:33.382596 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:05:33.382747 1970602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:05:33.382857 1970602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:05:33.382912 1970602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:05:33.383102 1970602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:05:33.383280 1970602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:05:33.383370 1970602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.354939ms
	I0120 14:05:33.383469 1970602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:05:33.383558 1970602 kubeadm.go:310] [api-check] The API server is healthy after 5.504896351s
	I0120 14:05:33.383728 1970602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:05:33.383925 1970602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:05:33.384013 1970602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:05:33.384335 1970602 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-647109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:05:33.384423 1970602 kubeadm.go:310] [bootstrap-token] Using token: lua4mv.z68od0ysi19pmefo
	I0120 14:05:33.386221 1970602 out.go:235]   - Configuring RBAC rules ...
	I0120 14:05:33.386365 1970602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:05:33.386446 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:05:33.386593 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:05:33.386761 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:05:33.386926 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:05:33.387058 1970602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:05:33.387208 1970602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:05:33.387276 1970602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:05:33.387343 1970602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:05:33.387355 1970602 kubeadm.go:310] 
	I0120 14:05:33.387441 1970602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:05:33.387450 1970602 kubeadm.go:310] 
	I0120 14:05:33.387576 1970602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:05:33.387589 1970602 kubeadm.go:310] 
	I0120 14:05:33.387627 1970602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:05:33.387678 1970602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:05:33.387738 1970602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:05:33.387748 1970602 kubeadm.go:310] 
	I0120 14:05:33.387843 1970602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:05:33.387853 1970602 kubeadm.go:310] 
	I0120 14:05:33.387930 1970602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:05:33.387939 1970602 kubeadm.go:310] 
	I0120 14:05:33.388012 1970602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:05:33.388091 1970602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:05:33.388156 1970602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:05:33.388160 1970602 kubeadm.go:310] 
	I0120 14:05:33.388249 1970602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:05:33.388325 1970602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:05:33.388332 1970602 kubeadm.go:310] 
	I0120 14:05:33.388404 1970602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lua4mv.z68od0ysi19pmefo \
	I0120 14:05:33.388491 1970602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:05:33.388524 1970602 kubeadm.go:310] 	--control-plane 
	I0120 14:05:33.388531 1970602 kubeadm.go:310] 
	I0120 14:05:33.388617 1970602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:05:33.388625 1970602 kubeadm.go:310] 
	I0120 14:05:33.388736 1970602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lua4mv.z68od0ysi19pmefo \
	I0120 14:05:33.388834 1970602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:05:33.388846 1970602 cni.go:84] Creating CNI manager for ""
	I0120 14:05:33.388853 1970602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:05:33.390876 1970602 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:05:33.392513 1970602 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:05:33.407354 1970602 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:05:33.428824 1970602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:05:33.428934 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:33.428977 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-647109 minikube.k8s.io/updated_at=2025_01_20T14_05_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=embed-certs-647109 minikube.k8s.io/primary=true
	I0120 14:05:33.473138 1970602 ops.go:34] apiserver oom_adj: -16
	I0120 14:05:33.718712 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:32.526764 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:35.026819 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:34.218762 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:34.719381 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:35.219746 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:35.718888 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:36.218775 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:36.718813 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:37.219353 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:37.393979 1970602 kubeadm.go:1113] duration metric: took 3.965125255s to wait for elevateKubeSystemPrivileges
	I0120 14:05:37.394019 1970602 kubeadm.go:394] duration metric: took 5m3.132880668s to StartCluster
	I0120 14:05:37.394048 1970602 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:37.394150 1970602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:05:37.396378 1970602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:37.396706 1970602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:05:37.396823 1970602 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:05:37.396933 1970602 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-647109"
	I0120 14:05:37.396953 1970602 config.go:182] Loaded profile config "embed-certs-647109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:05:37.396970 1970602 addons.go:69] Setting metrics-server=true in profile "embed-certs-647109"
	I0120 14:05:37.396980 1970602 addons.go:238] Setting addon metrics-server=true in "embed-certs-647109"
	W0120 14:05:37.396988 1970602 addons.go:247] addon metrics-server should already be in state true
	I0120 14:05:37.396987 1970602 addons.go:69] Setting default-storageclass=true in profile "embed-certs-647109"
	I0120 14:05:37.396953 1970602 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-647109"
	I0120 14:05:37.397011 1970602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-647109"
	W0120 14:05:37.397012 1970602 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:05:37.397041 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397044 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397479 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397483 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397495 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397519 1970602 addons.go:69] Setting dashboard=true in profile "embed-certs-647109"
	I0120 14:05:37.397526 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397532 1970602 addons.go:238] Setting addon dashboard=true in "embed-certs-647109"
	W0120 14:05:37.397539 1970602 addons.go:247] addon dashboard should already be in state true
	I0120 14:05:37.397563 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397606 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397785 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397855 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397900 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.401795 1970602 out.go:177] * Verifying Kubernetes components...
	I0120 14:05:34.179481 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:34.195424 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:34.195496 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:34.236592 1971155 cri.go:89] found id: ""
	I0120 14:05:34.236623 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.236632 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:34.236639 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:34.236696 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:34.275803 1971155 cri.go:89] found id: ""
	I0120 14:05:34.275836 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.275848 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:34.275855 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:34.275944 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:34.315900 1971155 cri.go:89] found id: ""
	I0120 14:05:34.315932 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.315944 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:34.315952 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:34.316019 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:34.353614 1971155 cri.go:89] found id: ""
	I0120 14:05:34.353646 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.353655 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:34.353661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:34.353735 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:34.395635 1971155 cri.go:89] found id: ""
	I0120 14:05:34.395673 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.395685 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:34.395698 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:34.395782 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:34.435631 1971155 cri.go:89] found id: ""
	I0120 14:05:34.435662 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.435672 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:34.435678 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:34.435742 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:34.474904 1971155 cri.go:89] found id: ""
	I0120 14:05:34.474940 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.474952 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:34.474960 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:34.475030 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:34.513643 1971155 cri.go:89] found id: ""
	I0120 14:05:34.513675 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.513688 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:34.513701 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:34.513719 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:34.531525 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:34.531559 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:34.614600 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:34.614649 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:34.614667 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:34.691236 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:34.691282 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:34.739567 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:34.739616 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:37.294798 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:37.313219 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:37.313309 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:37.360355 1971155 cri.go:89] found id: ""
	I0120 14:05:37.360392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.360406 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:37.360415 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:37.360493 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:37.400427 1971155 cri.go:89] found id: ""
	I0120 14:05:37.400456 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.400466 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:37.400475 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:37.400535 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:37.403396 1970602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:05:37.419631 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0120 14:05:37.419631 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0120 14:05:37.419751 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0120 14:05:37.420159 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.420340 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.420726 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.420753 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.420870 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.420883 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.421153 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.421286 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.421765 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.421807 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.421859 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.421907 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.423180 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.424356 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43445
	I0120 14:05:37.424853 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.427176 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.427218 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.431273 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.431306 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.431273 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.431590 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.431772 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.432414 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.432463 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.438218 1970602 addons.go:238] Setting addon default-storageclass=true in "embed-certs-647109"
	W0120 14:05:37.438363 1970602 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:05:37.438408 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.438859 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.439701 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.444146 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0120 14:05:37.444576 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
	I0120 14:05:37.444773 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.444915 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.445334 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.445367 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.445548 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.445565 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.445846 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.445940 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.446010 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.446155 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.448263 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.448850 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.451121 1970602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:05:37.451145 1970602 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:05:37.452901 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:05:37.452925 1970602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:05:37.452946 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.453029 1970602 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:37.453046 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:05:37.453066 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.457009 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457306 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.457323 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457535 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.457644 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457758 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.457905 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.458015 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.458314 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.458329 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.458460 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.458637 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.458741 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.458835 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.465409 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0120 14:05:37.466031 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.466695 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.466719 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.466964 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0120 14:05:37.467498 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.467603 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.468062 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.468085 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.468561 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.468603 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.469079 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.469289 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.471308 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.473344 1970602 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:05:37.475133 1970602 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:05:37.476628 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:05:37.476660 1970602 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:05:37.476691 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.480284 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.480952 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.480993 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.481641 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.481944 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.482177 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.482403 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.509821 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0120 14:05:37.510356 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.511017 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.511041 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.511533 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.511923 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.514239 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.514505 1970602 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:37.514525 1970602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:05:37.514547 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.518318 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.518891 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.518919 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.519100 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.519331 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.519489 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.519722 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.741139 1970602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:05:37.799051 1970602 node_ready.go:35] waiting up to 6m0s for node "embed-certs-647109" to be "Ready" ...
	I0120 14:05:37.809096 1970602 node_ready.go:49] node "embed-certs-647109" has status "Ready":"True"
	I0120 14:05:37.809130 1970602 node_ready.go:38] duration metric: took 10.033158ms for node "embed-certs-647109" to be "Ready" ...
	I0120 14:05:37.809146 1970602 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:37.819590 1970602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:37.940986 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:37.994181 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:05:37.994215 1970602 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:05:38.057795 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:05:38.057828 1970602 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:05:38.074299 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:05:38.074328 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:05:38.076399 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:38.161099 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:05:38.161133 1970602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:05:38.172032 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:05:38.172066 1970602 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:05:38.251253 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:05:38.251287 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:05:38.267793 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:38.267823 1970602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:05:38.300776 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:05:38.300806 1970602 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:05:38.438115 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:38.438263 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:05:38.438293 1970602 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:05:38.469992 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:05:38.470026 1970602 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:05:38.488178 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.488209 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.488602 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.488624 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.488633 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.488642 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.488642 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:38.488915 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.488928 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.506460 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.506490 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.506908 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.506932 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.506952 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:38.535768 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:05:38.535801 1970602 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:05:38.588204 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:38.588244 1970602 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:05:38.641430 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:37.532230 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:40.026877 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:39.322794 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24634872s)
	I0120 14:05:39.322872 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:39.322888 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:39.323266 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:39.323312 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:39.323332 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:39.323342 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:39.323351 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:39.323616 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:39.323623 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:39.323633 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:39.850519 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:40.002690 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.564518983s)
	I0120 14:05:40.002772 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.002791 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.003274 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.003336 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.003360 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.003372 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.003382 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.003762 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.003779 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.003791 1970602 addons.go:479] Verifying addon metrics-server=true in "embed-certs-647109"
	I0120 14:05:40.003823 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.923510 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.282025528s)
	I0120 14:05:40.923577 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.923608 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.923936 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.923983 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.924000 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.924023 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.924034 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.924348 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.924369 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.924375 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.926492 1970602 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-647109 addons enable metrics-server
	
	I0120 14:05:40.928141 1970602 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:05:37.472778 1971155 cri.go:89] found id: ""
	I0120 14:05:37.472800 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.472807 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:37.472814 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:37.472861 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:37.514813 1971155 cri.go:89] found id: ""
	I0120 14:05:37.514836 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.514846 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:37.514853 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:37.514912 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:37.559689 1971155 cri.go:89] found id: ""
	I0120 14:05:37.559724 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.559735 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:37.559768 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:37.559851 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:37.604249 1971155 cri.go:89] found id: ""
	I0120 14:05:37.604279 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.604291 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:37.604299 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:37.604372 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:37.655652 1971155 cri.go:89] found id: ""
	I0120 14:05:37.655689 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.655702 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:37.655710 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:37.655780 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:37.699626 1971155 cri.go:89] found id: ""
	I0120 14:05:37.699663 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.699677 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:37.699690 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:37.699706 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:37.761041 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:37.761105 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:37.789894 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:37.789933 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:37.870389 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:37.870424 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:37.870444 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:37.953788 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:37.953828 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:40.507832 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:40.526389 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:40.526479 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:40.564969 1971155 cri.go:89] found id: ""
	I0120 14:05:40.565007 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.565019 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:40.565028 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:40.565102 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:40.610815 1971155 cri.go:89] found id: ""
	I0120 14:05:40.610851 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.610863 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:40.610879 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:40.610950 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:40.656202 1971155 cri.go:89] found id: ""
	I0120 14:05:40.656241 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.656253 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:40.656261 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:40.656332 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:40.696520 1971155 cri.go:89] found id: ""
	I0120 14:05:40.696555 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.696567 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:40.696576 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:40.696655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:40.741177 1971155 cri.go:89] found id: ""
	I0120 14:05:40.741213 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.741224 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:40.741232 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:40.741321 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:40.787423 1971155 cri.go:89] found id: ""
	I0120 14:05:40.787463 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.787476 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:40.787486 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:40.787560 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:40.838180 1971155 cri.go:89] found id: ""
	I0120 14:05:40.838217 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.838227 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:40.838235 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:40.838308 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:40.877888 1971155 cri.go:89] found id: ""
	I0120 14:05:40.877922 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.877934 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:40.877947 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:40.877962 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:40.942664 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:40.942718 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:40.960105 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:40.960147 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:41.038583 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:41.038640 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:41.038660 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:41.125430 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:41.125499 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:40.930035 1970602 addons.go:514] duration metric: took 3.533222189s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:05:42.330147 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:43.342012 1970602 pod_ready.go:93] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.342038 1970602 pod_ready.go:82] duration metric: took 5.522419293s for pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.342050 1970602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.359479 1970602 pod_ready.go:93] pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.359506 1970602 pod_ready.go:82] duration metric: took 17.448444ms for pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.359518 1970602 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.403702 1970602 pod_ready.go:93] pod "etcd-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.403732 1970602 pod_ready.go:82] duration metric: took 44.20711ms for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.403744 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.413596 1970602 pod_ready.go:93] pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.413623 1970602 pod_ready.go:82] duration metric: took 9.873022ms for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.413634 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.421693 1970602 pod_ready.go:93] pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.421718 1970602 pod_ready.go:82] duration metric: took 8.077458ms for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.421731 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-chhpt" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.724510 1970602 pod_ready.go:93] pod "kube-proxy-chhpt" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.724537 1970602 pod_ready.go:82] duration metric: took 302.799519ms for pod "kube-proxy-chhpt" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.724549 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:42.527349 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:45.026552 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:43.677350 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:43.695745 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:43.695838 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:43.746662 1971155 cri.go:89] found id: ""
	I0120 14:05:43.746695 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.746710 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:43.746718 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:43.746791 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:43.802111 1971155 cri.go:89] found id: ""
	I0120 14:05:43.802142 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.802154 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:43.802163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:43.802234 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:43.849314 1971155 cri.go:89] found id: ""
	I0120 14:05:43.849351 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.849363 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:43.849371 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:43.849444 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:43.898242 1971155 cri.go:89] found id: ""
	I0120 14:05:43.898279 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.898293 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:43.898302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:43.898384 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:43.939248 1971155 cri.go:89] found id: ""
	I0120 14:05:43.939282 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.939293 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:43.939302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:43.939369 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:43.979271 1971155 cri.go:89] found id: ""
	I0120 14:05:43.979307 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.979327 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:43.979336 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:43.979408 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:44.016351 1971155 cri.go:89] found id: ""
	I0120 14:05:44.016387 1971155 logs.go:282] 0 containers: []
	W0120 14:05:44.016400 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:44.016409 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:44.016479 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:44.060965 1971155 cri.go:89] found id: ""
	I0120 14:05:44.061005 1971155 logs.go:282] 0 containers: []
	W0120 14:05:44.061017 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:44.061032 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:44.061050 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:44.076017 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:44.076070 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:44.159732 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:44.159761 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:44.159775 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:44.240721 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:44.240769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:44.285018 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:44.285061 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:46.839125 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:46.856748 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:46.856841 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:46.908851 1971155 cri.go:89] found id: ""
	I0120 14:05:46.908886 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.908898 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:46.908909 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:46.908978 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:46.949810 1971155 cri.go:89] found id: ""
	I0120 14:05:46.949865 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.949879 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:46.949887 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:46.949969 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:46.995158 1971155 cri.go:89] found id: ""
	I0120 14:05:46.995191 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.995201 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:46.995212 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:46.995284 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:47.034872 1971155 cri.go:89] found id: ""
	I0120 14:05:47.034905 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.034916 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:47.034924 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:47.034993 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:47.077500 1971155 cri.go:89] found id: ""
	I0120 14:05:47.077529 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.077537 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:47.077544 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:47.077608 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:47.118996 1971155 cri.go:89] found id: ""
	I0120 14:05:47.119027 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.119048 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:47.119059 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:47.119126 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:47.159902 1971155 cri.go:89] found id: ""
	I0120 14:05:47.159931 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.159943 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:47.159952 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:47.160027 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:47.201895 1971155 cri.go:89] found id: ""
	I0120 14:05:47.201922 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.201930 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:47.201942 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:47.201957 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:47.244852 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:47.244888 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:47.297439 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:47.297486 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:47.313519 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:47.313558 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:47.389340 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:47.389372 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:47.389391 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:45.324683 1970602 pod_ready.go:93] pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:45.324712 1970602 pod_ready.go:82] duration metric: took 1.600155124s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:45.324723 1970602 pod_ready.go:39] duration metric: took 7.515564286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:45.324743 1970602 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:45.324813 1970602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:45.381331 1970602 api_server.go:72] duration metric: took 7.98457351s to wait for apiserver process to appear ...
	I0120 14:05:45.381368 1970602 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:45.381388 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:05:45.386523 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I0120 14:05:45.387477 1970602 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:45.387504 1970602 api_server.go:131] duration metric: took 6.127764ms to wait for apiserver health ...
	I0120 14:05:45.387513 1970602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:45.530457 1970602 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:45.530502 1970602 system_pods.go:61] "coredns-668d6bf9bc-ndbzp" [d43c588e-6fc1-435b-9c9a-8b19201596ae] Running
	I0120 14:05:45.530510 1970602 system_pods.go:61] "coredns-668d6bf9bc-ndv97" [3298cf5d-5983-463b-8aca-792fa1d94241] Running
	I0120 14:05:45.530516 1970602 system_pods.go:61] "etcd-embed-certs-647109" [58f40005-bda9-4a38-8e2a-8e3f4a869c20] Running
	I0120 14:05:45.530521 1970602 system_pods.go:61] "kube-apiserver-embed-certs-647109" [8e188c16-1d56-4972-baf1-20d8dd10f440] Running
	I0120 14:05:45.530527 1970602 system_pods.go:61] "kube-controller-manager-embed-certs-647109" [691924af-9adb-4788-9104-0dcca6ee95f3] Running
	I0120 14:05:45.530532 1970602 system_pods.go:61] "kube-proxy-chhpt" [a0244020-668f-4700-85c2-9562f4d0c920] Running
	I0120 14:05:45.530537 1970602 system_pods.go:61] "kube-scheduler-embed-certs-647109" [6b42ab84-e4cb-4dc8-a4ad-e7da476ec3b2] Running
	I0120 14:05:45.530548 1970602 system_pods.go:61] "metrics-server-f79f97bbb-nqwxp" [68d39045-4c01-40a2-9e8f-0f7734838f0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:45.530559 1970602 system_pods.go:61] "storage-provisioner" [8067c033-4ef4-4945-95b5-f4120df75f5c] Running
	I0120 14:05:45.530574 1970602 system_pods.go:74] duration metric: took 143.054434ms to wait for pod list to return data ...
	I0120 14:05:45.530587 1970602 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:45.727314 1970602 default_sa.go:45] found service account: "default"
	I0120 14:05:45.727359 1970602 default_sa.go:55] duration metric: took 196.759471ms for default service account to be created ...
	I0120 14:05:45.727373 1970602 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:45.927406 1970602 system_pods.go:87] 9 kube-system pods found
	I0120 14:05:47.027640 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:49.526205 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:49.969003 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:49.983821 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:49.983904 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:50.024496 1971155 cri.go:89] found id: ""
	I0120 14:05:50.024525 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.024536 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:50.024545 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:50.024611 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:50.066376 1971155 cri.go:89] found id: ""
	I0120 14:05:50.066408 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.066416 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:50.066423 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:50.066497 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:50.106918 1971155 cri.go:89] found id: ""
	I0120 14:05:50.107034 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.107055 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:50.107065 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:50.107154 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:50.154846 1971155 cri.go:89] found id: ""
	I0120 14:05:50.154940 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.154962 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:50.154981 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:50.155095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:50.228177 1971155 cri.go:89] found id: ""
	I0120 14:05:50.228218 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.228238 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:50.228249 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:50.228334 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:50.294106 1971155 cri.go:89] found id: ""
	I0120 14:05:50.294145 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.294158 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:50.294167 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:50.294242 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:50.340312 1971155 cri.go:89] found id: ""
	I0120 14:05:50.340357 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.340368 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:50.340375 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:50.340450 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:50.384031 1971155 cri.go:89] found id: ""
	I0120 14:05:50.384070 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.384082 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:50.384095 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:50.384112 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:50.399361 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:50.399396 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:50.484820 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:50.484851 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:50.484868 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:50.594107 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:50.594171 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:50.647700 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:50.647740 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:51.527819 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:54.026000 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:53.213104 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:53.229463 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:53.229538 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:53.270860 1971155 cri.go:89] found id: ""
	I0120 14:05:53.270896 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.270909 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:53.270917 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:53.270977 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:53.311721 1971155 cri.go:89] found id: ""
	I0120 14:05:53.311748 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.311757 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:53.311764 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:53.311818 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:53.350019 1971155 cri.go:89] found id: ""
	I0120 14:05:53.350053 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.350064 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:53.350073 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:53.350144 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:53.386955 1971155 cri.go:89] found id: ""
	I0120 14:05:53.386982 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.386990 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:53.386996 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:53.387059 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:53.427056 1971155 cri.go:89] found id: ""
	I0120 14:05:53.427096 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.427105 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:53.427112 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:53.427170 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:53.468506 1971155 cri.go:89] found id: ""
	I0120 14:05:53.468546 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.468559 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:53.468568 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:53.468642 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:53.505884 1971155 cri.go:89] found id: ""
	I0120 14:05:53.505926 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.505938 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:53.505948 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:53.506024 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:53.547189 1971155 cri.go:89] found id: ""
	I0120 14:05:53.547232 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.547244 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:53.547258 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:53.547282 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:53.629525 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:53.629559 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:53.629577 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:53.711943 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:53.711994 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:53.761408 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:53.761442 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:53.815735 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:53.815781 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:56.332189 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:56.347525 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:56.347622 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:56.389104 1971155 cri.go:89] found id: ""
	I0120 14:05:56.389145 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.389156 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:56.389165 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:56.389244 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:56.427108 1971155 cri.go:89] found id: ""
	I0120 14:05:56.427151 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.427163 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:56.427173 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:56.427252 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:56.473424 1971155 cri.go:89] found id: ""
	I0120 14:05:56.473457 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.473469 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:56.473477 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:56.473560 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:56.513450 1971155 cri.go:89] found id: ""
	I0120 14:05:56.513485 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.513495 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:56.513504 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:56.513578 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:56.562482 1971155 cri.go:89] found id: ""
	I0120 14:05:56.562533 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.562546 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:56.562554 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:56.562652 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:56.604745 1971155 cri.go:89] found id: ""
	I0120 14:05:56.604776 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.604787 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:56.604795 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:56.604867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:56.645202 1971155 cri.go:89] found id: ""
	I0120 14:05:56.645245 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.645259 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:56.645268 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:56.645343 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:56.686351 1971155 cri.go:89] found id: ""
	I0120 14:05:56.686379 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.686388 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:56.686405 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:56.686419 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:56.700157 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:56.700206 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:56.780260 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:56.780289 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:56.780306 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:56.859551 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:56.859590 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:56.900940 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:56.900970 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:56.027202 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:58.526277 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:00.527173 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:59.457051 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:59.472587 1971155 kubeadm.go:597] duration metric: took 4m3.227513478s to restartPrimaryControlPlane
	W0120 14:05:59.472685 1971155 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:05:59.472723 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:06:01.310474 1971155 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.837720995s)
	I0120 14:06:01.310572 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:06:01.327408 1971155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:06:01.339235 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:06:01.350183 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:06:01.350209 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:06:01.350259 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:06:01.361183 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:06:01.361270 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:06:01.372352 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:06:01.382976 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:06:01.383040 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:06:01.394492 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:06:01.405628 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:06:01.405694 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:06:01.417040 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:06:01.428807 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:06:01.428872 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:06:01.441345 1971155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:06:01.698918 1971155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:06:03.025832 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:05.026627 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:07.027188 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:09.028290 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:11.031964 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:13.525789 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:15.526985 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:18.026476 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:20.027814 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:22.526030 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:24.526922 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:26.527440 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:28.528148 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:31.026333 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:33.527109 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:36.027336 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:38.526086 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:39.020400 1971324 pod_ready.go:82] duration metric: took 4m0.001084886s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" ...
	E0120 14:06:39.020434 1971324 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:06:39.020464 1971324 pod_ready.go:39] duration metric: took 4m13.544546991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:06:39.020512 1971324 kubeadm.go:597] duration metric: took 4m20.388785998s to restartPrimaryControlPlane
	W0120 14:06:39.020594 1971324 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:06:39.020633 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:07:06.810143 1971324 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.789476664s)
	I0120 14:07:06.810247 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:07:06.832457 1971324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:07:06.852749 1971324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:07:06.873857 1971324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:07:06.873882 1971324 kubeadm.go:157] found existing configuration files:
	
	I0120 14:07:06.873943 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 14:07:06.886791 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:07:06.886875 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:07:06.909304 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 14:07:06.925495 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:07:06.925578 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:07:06.946915 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 14:07:06.958045 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:07:06.958118 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:07:06.969792 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 14:07:06.980477 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:07:06.980546 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:07:06.992154 1971324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:07:07.047808 1971324 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:07:07.048054 1971324 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:07.167444 1971324 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:07.167631 1971324 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:07.167755 1971324 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:07:07.176704 1971324 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:07.178906 1971324 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:07.179018 1971324 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:07.179096 1971324 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:07.179214 1971324 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:07.179292 1971324 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:07.179407 1971324 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:07.179531 1971324 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:07.179632 1971324 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:07.179728 1971324 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:07.179830 1971324 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:07.179923 1971324 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:07.180006 1971324 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:07.180105 1971324 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:07.399949 1971324 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:07.525338 1971324 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:07:07.958528 1971324 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:08.085273 1971324 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:08.227675 1971324 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:08.228174 1971324 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:08.230880 1971324 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:08.232690 1971324 out.go:235]   - Booting up control plane ...
	I0120 14:07:08.232803 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:08.232885 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:08.233165 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:08.255003 1971324 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:08.263855 1971324 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:08.263966 1971324 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:08.414539 1971324 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:07:08.414702 1971324 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:07:08.915282 1971324 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.191909ms
	I0120 14:07:08.915410 1971324 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:07:14.418359 1971324 kubeadm.go:310] [api-check] The API server is healthy after 5.50145508s
	I0120 14:07:14.430935 1971324 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:07:14.460608 1971324 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:07:14.497450 1971324 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:07:14.497787 1971324 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-727256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:07:14.515343 1971324 kubeadm.go:310] [bootstrap-token] Using token: tkd27p.2n22jx81j70drifi
	I0120 14:07:14.516953 1971324 out.go:235]   - Configuring RBAC rules ...
	I0120 14:07:14.517145 1971324 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:07:14.535550 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:07:14.549490 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:07:14.554516 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:07:14.559606 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:07:14.567744 1971324 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:07:14.823696 1971324 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:07:15.255724 1971324 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:07:15.828061 1971324 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:07:15.829612 1971324 kubeadm.go:310] 
	I0120 14:07:15.829720 1971324 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:07:15.829734 1971324 kubeadm.go:310] 
	I0120 14:07:15.829934 1971324 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:07:15.829961 1971324 kubeadm.go:310] 
	I0120 14:07:15.829995 1971324 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:07:15.830134 1971324 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:07:15.830216 1971324 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:07:15.830238 1971324 kubeadm.go:310] 
	I0120 14:07:15.830300 1971324 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:07:15.830307 1971324 kubeadm.go:310] 
	I0120 14:07:15.830345 1971324 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:07:15.830351 1971324 kubeadm.go:310] 
	I0120 14:07:15.830452 1971324 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:07:15.830564 1971324 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:07:15.830687 1971324 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:07:15.830712 1971324 kubeadm.go:310] 
	I0120 14:07:15.830839 1971324 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:07:15.830917 1971324 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:07:15.830928 1971324 kubeadm.go:310] 
	I0120 14:07:15.831050 1971324 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token tkd27p.2n22jx81j70drifi \
	I0120 14:07:15.831203 1971324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:07:15.831236 1971324 kubeadm.go:310] 	--control-plane 
	I0120 14:07:15.831250 1971324 kubeadm.go:310] 
	I0120 14:07:15.831373 1971324 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:07:15.831381 1971324 kubeadm.go:310] 
	I0120 14:07:15.831510 1971324 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token tkd27p.2n22jx81j70drifi \
	I0120 14:07:15.831608 1971324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:07:15.832608 1971324 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:07:15.832644 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:07:15.832665 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:07:15.834574 1971324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:07:15.836200 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:07:15.852486 1971324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:07:15.883072 1971324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:07:15.883163 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:15.883217 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-727256 minikube.k8s.io/updated_at=2025_01_20T14_07_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=default-k8s-diff-port-727256 minikube.k8s.io/primary=true
	I0120 14:07:15.919057 1971324 ops.go:34] apiserver oom_adj: -16
	I0120 14:07:16.264800 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:16.765768 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:17.265700 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:17.765591 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:18.265120 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:18.765375 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.265828 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.765233 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.871124 1971324 kubeadm.go:1113] duration metric: took 3.988031359s to wait for elevateKubeSystemPrivileges
	I0120 14:07:19.871168 1971324 kubeadm.go:394] duration metric: took 5m1.294931591s to StartCluster
	I0120 14:07:19.871195 1971324 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:07:19.871308 1971324 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:07:19.872935 1971324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:07:19.873227 1971324 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:07:19.873360 1971324 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:07:19.873432 1971324 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873448 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:07:19.873475 1971324 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873456 1971324 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873525 1971324 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:07:19.873515 1971324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-727256"
	I0120 14:07:19.873512 1971324 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873579 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873591 1971324 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873602 1971324 addons.go:247] addon dashboard should already be in state true
	I0120 14:07:19.873461 1971324 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873645 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873644 1971324 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873658 1971324 addons.go:247] addon metrics-server should already be in state true
	I0120 14:07:19.873693 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873994 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874028 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874069 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874104 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874122 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874160 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874182 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874249 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.875156 1971324 out.go:177] * Verifying Kubernetes components...
	I0120 14:07:19.877548 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:07:19.894903 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41867
	I0120 14:07:19.895611 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0120 14:07:19.895799 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0120 14:07:19.895810 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0120 14:07:19.896235 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896371 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896374 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896427 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896946 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.896965 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897049 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897061 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897097 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897109 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897171 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897179 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897407 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897504 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897554 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.897763 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897815 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.898170 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.898210 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.898503 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.898556 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.899598 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.899642 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.901013 1971324 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.901024 1971324 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:07:19.901047 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.901256 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.901294 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.921489 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0120 14:07:19.922200 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.922354 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I0120 14:07:19.922487 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0120 14:07:19.923012 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.923115 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.923351 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.923371 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.923750 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.923773 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.923903 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.924012 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.924035 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.924227 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.925245 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.925523 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.926174 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.926409 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.926777 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0120 14:07:19.927338 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.927812 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.928588 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.928606 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.928749 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.928849 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.929144 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.929629 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.929667 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.930118 1971324 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:07:19.931197 1971324 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:07:19.931224 1971324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:07:19.933008 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:07:19.933033 1971324 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:07:19.933058 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.933259 1971324 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:07:19.933369 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:07:19.933389 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.933347 1971324 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:07:19.934800 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:07:19.934818 1971324 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:07:19.934847 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.937550 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.937957 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.937999 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.938124 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.938295 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.938406 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.938486 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.938817 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940255 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940625 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.940648 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940917 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.940993 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.941018 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.941159 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.941305 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.941350 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.941478 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.941512 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.941902 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.942284 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.948962 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34819
	I0120 14:07:19.949405 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.949966 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.949989 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.950388 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.950699 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.952288 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.952507 1971324 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:07:19.952523 1971324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:07:19.952542 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.956242 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.956713 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.956743 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.956859 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.957008 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.957169 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.957470 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:20.127114 1971324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:07:20.154612 1971324 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-727256" to be "Ready" ...
	I0120 14:07:20.192263 1971324 node_ready.go:49] node "default-k8s-diff-port-727256" has status "Ready":"True"
	I0120 14:07:20.192290 1971324 node_ready.go:38] duration metric: took 37.635597ms for node "default-k8s-diff-port-727256" to be "Ready" ...
	I0120 14:07:20.192301 1971324 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:07:20.213859 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:07:20.213892 1971324 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:07:20.231942 1971324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:20.258778 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:07:20.282980 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:07:20.283031 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:07:20.283840 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:07:20.283868 1971324 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:07:20.313871 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:07:20.313902 1971324 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:07:20.343875 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:07:20.343906 1971324 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:07:20.366130 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:07:20.366161 1971324 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:07:20.377530 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:07:20.391855 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:07:20.391890 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:07:20.422771 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:07:20.490042 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:07:20.490070 1971324 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:07:20.668552 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.668581 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.668941 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.669010 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.669026 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.669028 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:20.669036 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.669363 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.669390 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.675996 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.676026 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.676331 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.676388 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.676354 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:20.680026 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:07:20.680052 1971324 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:07:20.807657 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:07:20.807698 1971324 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:07:20.876039 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:07:20.876068 1971324 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:07:20.999452 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:07:20.999483 1971324 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:07:21.023485 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:07:21.643979 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266406433s)
	I0120 14:07:21.644056 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:21.644071 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:21.644447 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:21.644477 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:21.644506 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:21.644521 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:21.644831 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:21.644845 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.256978 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:22.324244 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.901426994s)
	I0120 14:07:22.324341 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:22.324361 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:22.324787 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:22.324849 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:22.324866 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.324875 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:22.324883 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:22.325248 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:22.325278 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:22.325285 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.325302 1971324 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-727256"
	I0120 14:07:23.339621 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.316057578s)
	I0120 14:07:23.339712 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:23.339732 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:23.340118 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:23.340201 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:23.340216 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:23.340225 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:23.340136 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:23.340517 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:23.342106 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:23.342125 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:23.343861 1971324 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-727256 addons enable metrics-server
	
	I0120 14:07:23.345414 1971324 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:07:23.346269 1971324 addons.go:514] duration metric: took 3.472914176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:07:24.739396 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:26.739597 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:27.738986 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.739017 1971324 pod_ready.go:82] duration metric: took 7.507037469s for pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.739032 1971324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.745501 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.745528 1971324 pod_ready.go:82] duration metric: took 6.487852ms for pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.745540 1971324 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.750780 1971324 pod_ready.go:93] pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.750815 1971324 pod_ready.go:82] duration metric: took 5.263354ms for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.750829 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.757357 1971324 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.757387 1971324 pod_ready.go:82] duration metric: took 6.549516ms for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.757400 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.763302 1971324 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.763332 1971324 pod_ready.go:82] duration metric: took 5.92298ms for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.763347 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6vtjs" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.139358 1971324 pod_ready.go:93] pod "kube-proxy-6vtjs" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:28.139385 1971324 pod_ready.go:82] duration metric: took 376.030461ms for pod "kube-proxy-6vtjs" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.139395 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.536558 1971324 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:28.536595 1971324 pod_ready.go:82] duration metric: took 397.192361ms for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.536609 1971324 pod_ready.go:39] duration metric: took 8.344296802s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:07:28.536633 1971324 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:07:28.536700 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:07:28.573027 1971324 api_server.go:72] duration metric: took 8.699758175s to wait for apiserver process to appear ...
	I0120 14:07:28.573068 1971324 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:07:28.573101 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:07:28.578383 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0120 14:07:28.579376 1971324 api_server.go:141] control plane version: v1.32.0
	I0120 14:07:28.579402 1971324 api_server.go:131] duration metric: took 6.325441ms to wait for apiserver health ...
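	For reference, the healthz probe performed here can be reproduced by hand against the same endpoint; a minimal sketch, assuming anonymous access to /healthz is allowed (the Kubernetes default) and using the address taken from the log above, with -k because the apiserver serves a cluster-local certificate:
	
		curl -k https://192.168.72.104:8444/healthz
		# a healthy control plane answers with HTTP 200 and the body "ok", matching the log line above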
	I0120 14:07:28.579413 1971324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:07:28.743059 1971324 system_pods.go:59] 9 kube-system pods found
	I0120 14:07:28.743094 1971324 system_pods.go:61] "coredns-668d6bf9bc-l4rmh" [06f4698d-c393-4f30-b8de-77ade02b575e] Running
	I0120 14:07:28.743100 1971324 system_pods.go:61] "coredns-668d6bf9bc-v22vm" [95644362-4ab9-405f-b433-5b384ab083d1] Running
	I0120 14:07:28.743104 1971324 system_pods.go:61] "etcd-default-k8s-diff-port-727256" [888345c9-ff71-44eb-9501-6a878f6e7fce] Running
	I0120 14:07:28.743108 1971324 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-727256" [2c11d7e2-9f34-4861-977b-7559572c5eb9] Running
	I0120 14:07:28.743111 1971324 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-727256" [f6202668-dca8-46a8-9ac2-d58b96bda552] Running
	I0120 14:07:28.743115 1971324 system_pods.go:61] "kube-proxy-6vtjs" [d57cfd3b-d6bd-4e61-a606-b2451a3768ca] Running
	I0120 14:07:28.743118 1971324 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-727256" [764e1f75-6402-4ce2-9d44-5d8af5dbb0e8] Running
	I0120 14:07:28.743124 1971324 system_pods.go:61] "metrics-server-f79f97bbb-kp5hl" [190513f9-3e9f-4705-ae23-9481987802f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:07:28.743129 1971324 system_pods.go:61] "storage-provisioner" [0f716b6a-f5d2-49a0-a810-e0cdf72a3020] Running
	I0120 14:07:28.743136 1971324 system_pods.go:74] duration metric: took 163.71699ms to wait for pod list to return data ...
	I0120 14:07:28.743145 1971324 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:07:28.937247 1971324 default_sa.go:45] found service account: "default"
	I0120 14:07:28.937280 1971324 default_sa.go:55] duration metric: took 194.12949ms for default service account to be created ...
	I0120 14:07:28.937291 1971324 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:07:29.391088 1971324 system_pods.go:87] 9 kube-system pods found
	I0120 14:07:57.893064 1971155 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 14:07:57.893206 1971155 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 14:07:57.895047 1971155 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 14:07:57.895110 1971155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:57.895204 1971155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:57.895358 1971155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:57.895455 1971155 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 14:07:57.895510 1971155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:57.897667 1971155 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:57.897769 1971155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:57.897859 1971155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:57.897979 1971155 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:57.898089 1971155 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:57.898184 1971155 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:57.898261 1971155 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:57.898370 1971155 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:57.898473 1971155 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:57.898549 1971155 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:57.898650 1971155 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:57.898706 1971155 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:57.898808 1971155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:57.898866 1971155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:57.898917 1971155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:57.898971 1971155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:57.899018 1971155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:57.899156 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:57.899270 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:57.899322 1971155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:57.899385 1971155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:57.900907 1971155 out.go:235]   - Booting up control plane ...
	I0120 14:07:57.901012 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:57.901098 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:57.901183 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:57.901301 1971155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:57.901498 1971155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 14:07:57.901549 1971155 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 14:07:57.901614 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.901802 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.901862 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902008 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902071 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902248 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902332 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902476 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902532 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902723 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902740 1971155 kubeadm.go:310] 
	I0120 14:07:57.902798 1971155 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 14:07:57.902913 1971155 kubeadm.go:310] 		timed out waiting for the condition
	I0120 14:07:57.902942 1971155 kubeadm.go:310] 
	I0120 14:07:57.902990 1971155 kubeadm.go:310] 	This error is likely caused by:
	I0120 14:07:57.903050 1971155 kubeadm.go:310] 		- The kubelet is not running
	I0120 14:07:57.903175 1971155 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 14:07:57.903185 1971155 kubeadm.go:310] 
	I0120 14:07:57.903309 1971155 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 14:07:57.903358 1971155 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 14:07:57.903406 1971155 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 14:07:57.903415 1971155 kubeadm.go:310] 
	I0120 14:07:57.903535 1971155 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 14:07:57.903608 1971155 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 14:07:57.903614 1971155 kubeadm.go:310] 
	I0120 14:07:57.903742 1971155 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 14:07:57.903828 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 14:07:57.903894 1971155 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 14:07:57.903959 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 14:07:57.903970 1971155 kubeadm.go:310] 
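	The advice kubeadm prints above reduces to two checks: whether the kubelet ever became healthy, and whether any control-plane container was started by CRI-O. A minimal troubleshooting sketch on the affected node, using only the commands the log itself suggests (the CRI-O socket path is taken from the log and may differ on other hosts; CONTAINERID is a placeholder):
	
		# is the kubelet running, and why did it stop?
		systemctl status kubelet
		journalctl -xeu kubelet | tail -n 100
		# did any control-plane container start under CRI-O?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# inspect a failing container's logs
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID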
	W0120 14:07:57.904147 1971155 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 14:07:57.904205 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:07:58.379343 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:07:58.394094 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:07:58.405184 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:07:58.405214 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:07:58.405275 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:07:58.415126 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:07:58.415190 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:07:58.425525 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:07:58.435286 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:07:58.435402 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:07:58.445346 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:07:58.455338 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:07:58.455400 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:07:58.465346 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:07:58.474739 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:07:58.474821 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
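	The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is deleted otherwise before kubeadm init is retried. A condensed sketch of the same behaviour (file names and endpoint taken from the log; in this run every grep fails because the files were already removed by kubeadm reset, so the rm calls are no-ops):
	
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
		done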
	I0120 14:07:58.484664 1971155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:07:58.559434 1971155 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 14:07:58.559546 1971155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:58.713832 1971155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:58.713978 1971155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:58.714110 1971155 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 14:07:58.902142 1971155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:58.904151 1971155 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:58.904252 1971155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:58.904340 1971155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:58.904451 1971155 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:58.904532 1971155 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:58.904662 1971155 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:58.904752 1971155 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:58.904850 1971155 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:58.904938 1971155 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:58.905078 1971155 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:58.905203 1971155 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:58.905255 1971155 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:58.905311 1971155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:59.059284 1971155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:59.367307 1971155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:59.478773 1971155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:59.769599 1971155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:59.795017 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:59.796387 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:59.796440 1971155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:59.967182 1971155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:59.969049 1971155 out.go:235]   - Booting up control plane ...
	I0120 14:07:59.969210 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:59.969498 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:59.978995 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:59.980298 1971155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:59.983629 1971155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 14:08:39.986873 1971155 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 14:08:39.986972 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:39.987222 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:08:44.987592 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:44.987868 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:08:54.988530 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:54.988725 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:14.990244 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:09:14.990492 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:54.990993 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:09:54.991340 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:54.991370 1971155 kubeadm.go:310] 
	I0120 14:09:54.991419 1971155 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 14:09:54.991474 1971155 kubeadm.go:310] 		timed out waiting for the condition
	I0120 14:09:54.991485 1971155 kubeadm.go:310] 
	I0120 14:09:54.991536 1971155 kubeadm.go:310] 	This error is likely caused by:
	I0120 14:09:54.991582 1971155 kubeadm.go:310] 		- The kubelet is not running
	I0120 14:09:54.991734 1971155 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 14:09:54.991760 1971155 kubeadm.go:310] 
	I0120 14:09:54.991930 1971155 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 14:09:54.991981 1971155 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 14:09:54.992034 1971155 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 14:09:54.992065 1971155 kubeadm.go:310] 
	I0120 14:09:54.992234 1971155 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 14:09:54.992326 1971155 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 14:09:54.992342 1971155 kubeadm.go:310] 
	I0120 14:09:54.992508 1971155 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 14:09:54.992650 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 14:09:54.992786 1971155 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 14:09:54.992894 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 14:09:54.992904 1971155 kubeadm.go:310] 
	I0120 14:09:54.994025 1971155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:09:54.994123 1971155 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 14:09:54.994214 1971155 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 14:09:54.994325 1971155 kubeadm.go:394] duration metric: took 7m58.806679255s to StartCluster
	I0120 14:09:54.994398 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:09:54.994475 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:09:55.044299 1971155 cri.go:89] found id: ""
	I0120 14:09:55.044338 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.044350 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:09:55.044359 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:09:55.044434 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:09:55.088726 1971155 cri.go:89] found id: ""
	I0120 14:09:55.088759 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.088767 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:09:55.088774 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:09:55.088848 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:09:55.127484 1971155 cri.go:89] found id: ""
	I0120 14:09:55.127513 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.127523 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:09:55.127531 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:09:55.127602 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:09:55.167042 1971155 cri.go:89] found id: ""
	I0120 14:09:55.167079 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.167091 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:09:55.167100 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:09:55.167173 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:09:55.206075 1971155 cri.go:89] found id: ""
	I0120 14:09:55.206111 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.206122 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:09:55.206128 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:09:55.206184 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:09:55.262849 1971155 cri.go:89] found id: ""
	I0120 14:09:55.262895 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.262907 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:09:55.262917 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:09:55.262989 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:09:55.303064 1971155 cri.go:89] found id: ""
	I0120 14:09:55.303102 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.303114 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:09:55.303122 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:09:55.303190 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:09:55.339202 1971155 cri.go:89] found id: ""
	I0120 14:09:55.339237 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.339248 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:09:55.339262 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:09:55.339279 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:09:55.425991 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:09:55.426022 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:09:55.426042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:09:55.529413 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:09:55.529454 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:09:55.574927 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:09:55.574965 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:09:55.631464 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:09:55.631507 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0120 14:09:55.647055 1971155 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 14:09:55.647121 1971155 out.go:270] * 
	W0120 14:09:55.647197 1971155 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 14:09:55.647230 1971155 out.go:270] * 
	W0120 14:09:55.648431 1971155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 14:09:55.652120 1971155 out.go:201] 
	W0120 14:09:55.653811 1971155 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 14:09:55.653880 1971155 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 14:09:55.653909 1971155 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 14:09:55.655598 1971155 out.go:201] 
	
	
	==> CRI-O <==
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.002129305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737382740002108302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b82a278-3194-42cc-aa1b-69a636667765 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.003013680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f782241-ec2e-4414-af2c-cc070c87b799 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.003089920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f782241-ec2e-4414-af2c-cc070c87b799 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.003133712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5f782241-ec2e-4414-af2c-cc070c87b799 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.038567852Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60ca76bd-0ca8-41b2-9c64-ebf778ee1325 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.038702641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60ca76bd-0ca8-41b2-9c64-ebf778ee1325 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.040066381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc186e0b-e826-453f-8d7b-e232d8e4710d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.040466603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737382740040446364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc186e0b-e826-453f-8d7b-e232d8e4710d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.041047541Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5cc7ecc5-a024-4130-bacc-03f3f34ef84b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.041090969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5cc7ecc5-a024-4130-bacc-03f3f34ef84b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.041121703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5cc7ecc5-a024-4130-bacc-03f3f34ef84b name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.074498809Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09e39298-1997-4d34-a0a7-de75c1e5332e name=/runtime.v1.RuntimeService/Version
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.074572768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09e39298-1997-4d34-a0a7-de75c1e5332e name=/runtime.v1.RuntimeService/Version
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.075709011Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8182a51b-a368-4519-a2ea-09e074ea964f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.076141910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737382740076115656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8182a51b-a368-4519-a2ea-09e074ea964f name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.076810050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=603c9f07-2541-4f6e-8ffd-3414efc148b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.076857105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=603c9f07-2541-4f6e-8ffd-3414efc148b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.076886211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=603c9f07-2541-4f6e-8ffd-3414efc148b9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.112180039Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff5fcb55-3ab0-49d9-87f9-f5ef4912eaff name=/runtime.v1.RuntimeService/Version
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.112265238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff5fcb55-3ab0-49d9-87f9-f5ef4912eaff name=/runtime.v1.RuntimeService/Version
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.113603997Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d6c0307-9ebe-4db3-90b2-e035c2adcc48 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.114073586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737382740114044465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d6c0307-9ebe-4db3-90b2-e035c2adcc48 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.114757499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f60d5f00-1fe3-4b85-93c3-82c94b4a791f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.114826858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f60d5f00-1fe3-4b85-93c3-82c94b4a791f name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:19:00 old-k8s-version-191446 crio[632]: time="2025-01-20 14:19:00.114873529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f60d5f00-1fe3-4b85-93c3-82c94b4a791f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan20 14:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057095] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.065658] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.960481] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.662559] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.919769] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.062908] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080713] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.237108] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.143710] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.284512] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.705620] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.060994] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.033318] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[Jan20 14:02] kauditd_printk_skb: 46 callbacks suppressed
	[Jan20 14:06] systemd-fstab-generator[5027]: Ignoring "noauto" option for root device
	[Jan20 14:07] systemd-fstab-generator[5311]: Ignoring "noauto" option for root device
	[  +0.070549] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:19:00 up 17 min,  0 users,  load average: 0.01, 0.04, 0.06
	Linux old-k8s-version-191446 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]: goroutine 150 [runnable]:
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000db4000)
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]: goroutine 151 [select]:
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc00072bb30, 0xc000c72a01, 0xc000883980, 0xc000c5a6b0, 0xc000c7e200, 0xc000c7e1c0)
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000c72a80, 0x0, 0x0)
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000db4000)
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jan 20 14:18:57 old-k8s-version-191446 kubelet[6478]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jan 20 14:18:57 old-k8s-version-191446 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 20 14:18:57 old-k8s-version-191446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 20 14:18:58 old-k8s-version-191446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jan 20 14:18:58 old-k8s-version-191446 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 20 14:18:58 old-k8s-version-191446 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 20 14:18:58 old-k8s-version-191446 kubelet[6487]: I0120 14:18:58.208899    6487 server.go:416] Version: v1.20.0
	Jan 20 14:18:58 old-k8s-version-191446 kubelet[6487]: I0120 14:18:58.209298    6487 server.go:837] Client rotation is on, will bootstrap in background
	Jan 20 14:18:58 old-k8s-version-191446 kubelet[6487]: I0120 14:18:58.211444    6487 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 20 14:18:58 old-k8s-version-191446 kubelet[6487]: W0120 14:18:58.212602    6487 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jan 20 14:18:58 old-k8s-version-191446 kubelet[6487]: I0120 14:18:58.212820    6487 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
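The kubeadm output captured above already names the checks to run when the kubelet never becomes healthy: 'systemctl status kubelet', 'journalctl -xeu kubelet', and crictl against the CRI-O socket. Purely as an illustrative sketch (not something the test harness runs), the same checks could be issued against this profile via minikube ssh; the sudo wrapping and the tail filter are assumptions added here:

	# inspect the kubelet service and its recent journal inside the VM
	out/minikube-linux-amd64 -p old-k8s-version-191446 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-191446 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# list any Kubernetes containers CRI-O knows about (command form taken from the kubeadm hint above)
	out/minikube-linux-amd64 -p old-k8s-version-191446 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"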
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 2 (260.796154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-191446" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.82s)
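The failure above ends with minikube's own suggestion to check 'journalctl -xeu kubelet' and to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of retrying the start with that flag follows; the flag value, profile name, and Kubernetes version are taken verbatim from the log, while the driver and container-runtime flags are assumptions inferred from the job name (KVM_Linux_crio):

	# retry the failed start with the kubelet cgroup driver pinned to systemd, per the suggestion above
	out/minikube-linux-amd64 start -p old-k8s-version-191446 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

Pinning the kubelet's cgroup driver to systemd only helps when a driver mismatch with the runtime is the actual cause; this log alone does not establish that it is.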

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (381.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
E0120 14:20:39.684318 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
	[previous WARNING line repeated 47 times in total: every poll of the apiserver at 192.168.61.215:8443 failed with "connect: connection refused"]
E0120 14:21:26.556637 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
	[previous WARNING line repeated 70 times in total: every poll of the apiserver at 192.168.61.215:8443 failed with "connect: connection refused"]
E0120 14:22:36.597470 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
	[previous WARNING line repeated 43 times in total: every poll of the apiserver at 192.168.61.215:8443 failed with "connect: connection refused"]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.215:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 2 (265.523602ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-191446" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-191446 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-191446 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.744µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-191446 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 2 (252.199633ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-191446 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-191446 logs -n 25: (1.295998656s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:56 UTC | 20 Jan 25 13:57 UTC |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-038404                              | cert-expiration-038404       | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	| start   | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-648067             | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:57 UTC | 20 Jan 25 13:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-377526                           | kubernetes-upgrade-377526    | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-955986 | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	|         | disable-driver-mounts-955986                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:59 UTC |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-647109            | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 13:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 13:58 UTC | 20 Jan 25 14:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-648067                  | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 13:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-648067                                   | no-preload-648067            | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-191446        | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-727256  | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 13:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 13:59 UTC | 20 Jan 25 14:01 UTC |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-647109                 | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC | 20 Jan 25 14:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-647109                                  | embed-certs-647109           | jenkins | v1.35.0 | 20 Jan 25 14:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-191446             | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-191446                              | old-k8s-version-191446       | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-727256       | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC | 20 Jan 25 14:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-727256 | jenkins | v1.35.0 | 20 Jan 25 14:01 UTC |                     |
	|         | default-k8s-diff-port-727256                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 14:01:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 14:01:30.648649 1971324 out.go:345] Setting OutFile to fd 1 ...
	I0120 14:01:30.648768 1971324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:30.648777 1971324 out.go:358] Setting ErrFile to fd 2...
	I0120 14:01:30.648781 1971324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 14:01:30.648971 1971324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 14:01:30.649563 1971324 out.go:352] Setting JSON to false
	I0120 14:01:30.650677 1971324 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20637,"bootTime":1737361054,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 14:01:30.650808 1971324 start.go:139] virtualization: kvm guest
	I0120 14:01:30.653087 1971324 out.go:177] * [default-k8s-diff-port-727256] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 14:01:30.654902 1971324 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 14:01:30.654958 1971324 notify.go:220] Checking for updates...
	I0120 14:01:30.657200 1971324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 14:01:30.658358 1971324 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:01:30.659540 1971324 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 14:01:30.660755 1971324 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 14:01:30.662124 1971324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 14:01:30.664066 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:01:30.664694 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:30.664783 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:30.683363 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0120 14:01:30.684660 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:30.685421 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:30.685453 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:30.685849 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:30.686136 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:30.686482 1971324 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 14:01:30.686962 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:30.687017 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:30.705214 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0120 14:01:30.705778 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:30.706464 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:30.706496 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:30.706910 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:30.707413 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:30.748140 1971324 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 14:01:30.749542 1971324 start.go:297] selected driver: kvm2
	I0120 14:01:30.749575 1971324 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:30.749732 1971324 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 14:01:30.750471 1971324 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:30.750569 1971324 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 14:01:30.769419 1971324 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 14:01:30.769920 1971324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0120 14:01:30.769962 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:01:30.770026 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:01:30.770087 1971324 start.go:340] cluster config:
	{Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:30.770203 1971324 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 14:01:30.772094 1971324 out.go:177] * Starting "default-k8s-diff-port-727256" primary control-plane node in "default-k8s-diff-port-727256" cluster
	I0120 14:01:27.567956 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .Start
	I0120 14:01:27.568241 1971155 main.go:141] libmachine: (old-k8s-version-191446) starting domain...
	I0120 14:01:27.568273 1971155 main.go:141] libmachine: (old-k8s-version-191446) ensuring networks are active...
	I0120 14:01:27.569283 1971155 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network default is active
	I0120 14:01:27.569742 1971155 main.go:141] libmachine: (old-k8s-version-191446) Ensuring network mk-old-k8s-version-191446 is active
	I0120 14:01:27.570107 1971155 main.go:141] libmachine: (old-k8s-version-191446) getting domain XML...
	I0120 14:01:27.570794 1971155 main.go:141] libmachine: (old-k8s-version-191446) creating domain...
	I0120 14:01:28.844259 1971155 main.go:141] libmachine: (old-k8s-version-191446) waiting for IP...
	I0120 14:01:28.845169 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:28.845736 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:28.845869 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:28.845749 1971190 retry.go:31] will retry after 249.093991ms: waiting for domain to come up
	I0120 14:01:29.096266 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.096835 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.096870 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.096778 1971190 retry.go:31] will retry after 285.937419ms: waiting for domain to come up
	I0120 14:01:29.384654 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.385227 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.385260 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.385184 1971190 retry.go:31] will retry after 403.444594ms: waiting for domain to come up
	I0120 14:01:29.789819 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:29.790466 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:29.790516 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:29.790442 1971190 retry.go:31] will retry after 525.904837ms: waiting for domain to come up
	I0120 14:01:30.361342 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:30.361758 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:30.361799 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:30.361742 1971190 retry.go:31] will retry after 498.844656ms: waiting for domain to come up
	I0120 14:01:30.862672 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:30.863328 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:30.863359 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:30.863284 1971190 retry.go:31] will retry after 695.176765ms: waiting for domain to come up
	I0120 14:01:31.559994 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:31.560418 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:31.560483 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:31.560423 1971190 retry.go:31] will retry after 1.138767233s: waiting for domain to come up
	I0120 14:01:29.280435 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:31.281034 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:33.778046 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:32.686925 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:35.185223 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:30.773441 1971324 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:01:30.773503 1971324 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0120 14:01:30.773514 1971324 cache.go:56] Caching tarball of preloaded images
	I0120 14:01:30.773638 1971324 preload.go:172] Found /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0120 14:01:30.773650 1971324 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0120 14:01:30.773755 1971324 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/config.json ...
	I0120 14:01:30.774002 1971324 start.go:360] acquireMachinesLock for default-k8s-diff-port-727256: {Name:mkbf148c6fec0c722eed081be3cef9d0990100c3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0120 14:01:32.700822 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:32.701293 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:32.701323 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:32.701238 1971190 retry.go:31] will retry after 1.039348308s: waiting for domain to come up
	I0120 14:01:33.742152 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:33.742798 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:33.742827 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:33.742756 1971190 retry.go:31] will retry after 1.487881975s: waiting for domain to come up
	I0120 14:01:35.232385 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:35.232903 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:35.233000 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:35.232883 1971190 retry.go:31] will retry after 1.541170209s: waiting for domain to come up
	I0120 14:01:36.775877 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:36.776558 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:36.776586 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:36.776513 1971190 retry.go:31] will retry after 2.896053576s: waiting for domain to come up
	I0120 14:01:35.778385 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:37.778939 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:37.187266 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:39.686105 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:39.675363 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:39.675986 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:39.676021 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:39.675945 1971190 retry.go:31] will retry after 3.105341623s: waiting for domain to come up
	I0120 14:01:39.779284 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.278570 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.185136 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:44.686564 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:42.783450 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:42.783953 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | unable to find current IP address of domain old-k8s-version-191446 in network mk-old-k8s-version-191446
	I0120 14:01:42.783979 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | I0120 14:01:42.783919 1971190 retry.go:31] will retry after 3.216558184s: waiting for domain to come up
	I0120 14:01:46.001813 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.002358 1971155 main.go:141] libmachine: (old-k8s-version-191446) found domain IP: 192.168.61.215
	I0120 14:01:46.002386 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has current primary IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.002392 1971155 main.go:141] libmachine: (old-k8s-version-191446) reserving static IP address...
	I0120 14:01:46.002890 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "old-k8s-version-191446", mac: "52:54:00:87:83:fb", ip: "192.168.61.215"} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.002913 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | skip adding static IP to network mk-old-k8s-version-191446 - found existing host DHCP lease matching {name: "old-k8s-version-191446", mac: "52:54:00:87:83:fb", ip: "192.168.61.215"}
	I0120 14:01:46.002961 1971155 main.go:141] libmachine: (old-k8s-version-191446) reserved static IP address 192.168.61.215 for domain old-k8s-version-191446
	I0120 14:01:46.003012 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Getting to WaitForSSH function...
	I0120 14:01:46.003029 1971155 main.go:141] libmachine: (old-k8s-version-191446) waiting for SSH...
	I0120 14:01:46.005479 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.005822 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.005844 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.005930 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH client type: external
	I0120 14:01:46.005974 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa (-rw-------)
	I0120 14:01:46.006012 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:01:46.006030 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | About to run SSH command:
	I0120 14:01:46.006042 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | exit 0
	I0120 14:01:46.134861 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | SSH cmd err, output: <nil>: 
	I0120 14:01:46.135287 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetConfigRaw
	I0120 14:01:46.135993 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:46.138498 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.138913 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.138949 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.139408 1971155 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/config.json ...
	I0120 14:01:46.139628 1971155 machine.go:93] provisionDockerMachine start ...
	I0120 14:01:46.139648 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:46.139910 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.142776 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.143168 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.143196 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.143377 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.143551 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.143710 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.143884 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.144084 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.144287 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.144299 1971155 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:01:46.259874 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:01:46.259909 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.260184 1971155 buildroot.go:166] provisioning hostname "old-k8s-version-191446"
	I0120 14:01:46.260218 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.260442 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.263109 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.263469 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.263498 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.263608 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.263809 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.263964 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.264115 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.264263 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.264566 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.264598 1971155 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-191446 && echo "old-k8s-version-191446" | sudo tee /etc/hostname
	I0120 14:01:46.390733 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-191446
	
	I0120 14:01:46.390778 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.394086 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.394452 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.394495 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.394665 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.394902 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.395120 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.395312 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.395484 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.395721 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.395742 1971155 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-191446' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-191446/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-191446' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:01:46.517398 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:01:46.517429 1971155 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:01:46.517474 1971155 buildroot.go:174] setting up certificates
	I0120 14:01:46.517489 1971155 provision.go:84] configureAuth start
	I0120 14:01:46.517501 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetMachineName
	I0120 14:01:46.517852 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:46.520852 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.521243 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.521276 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.521419 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.523721 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.524173 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.524216 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.524323 1971155 provision.go:143] copyHostCerts
	I0120 14:01:46.524385 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:01:46.524406 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:01:46.524505 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:01:46.524641 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:01:46.524653 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:01:46.524681 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:01:46.524749 1971155 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:01:46.524756 1971155 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:01:46.524777 1971155 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:01:46.524823 1971155 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-191446 san=[127.0.0.1 192.168.61.215 localhost minikube old-k8s-version-191446]
	I0120 14:01:46.780575 1971155 provision.go:177] copyRemoteCerts
	I0120 14:01:46.780653 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:01:46.780692 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.783791 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.784174 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.784204 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.784390 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.784667 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.784947 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.785129 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:46.873537 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:01:46.906323 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0120 14:01:46.934595 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0120 14:01:46.963136 1971155 provision.go:87] duration metric: took 445.630599ms to configureAuth
	I0120 14:01:46.963175 1971155 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:01:46.963391 1971155 config.go:182] Loaded profile config "old-k8s-version-191446": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0120 14:01:46.963495 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:46.966539 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.966917 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:46.966953 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:46.967102 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:46.967316 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.967488 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:46.967694 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:46.967860 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:46.968110 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:46.968140 1971155 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:01:47.221729 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:01:47.221758 1971155 machine.go:96] duration metric: took 1.082115997s to provisionDockerMachine
	I0120 14:01:47.221770 1971155 start.go:293] postStartSetup for "old-k8s-version-191446" (driver="kvm2")
	I0120 14:01:47.221780 1971155 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:01:47.221801 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.222156 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:01:47.222213 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.225564 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.226024 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.226063 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.226226 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.226479 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.226678 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.226841 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.315044 1971155 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:01:47.319600 1971155 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:01:47.319630 1971155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:01:47.319699 1971155 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:01:47.319785 1971155 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:01:47.319880 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:01:47.331251 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:01:47.359102 1971155 start.go:296] duration metric: took 137.311216ms for postStartSetup
	I0120 14:01:47.359156 1971155 fix.go:56] duration metric: took 19.814283548s for fixHost
	I0120 14:01:47.359184 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.362176 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.362643 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.362680 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.362916 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.363161 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.363352 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.363520 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.363693 1971155 main.go:141] libmachine: Using SSH client type: native
	I0120 14:01:47.363932 1971155 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.61.215 22 <nil> <nil>}
	I0120 14:01:47.363948 1971155 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:01:47.480212 1971324 start.go:364] duration metric: took 16.706172443s to acquireMachinesLock for "default-k8s-diff-port-727256"
	I0120 14:01:47.480300 1971324 start.go:96] Skipping create...Using existing machine configuration
	I0120 14:01:47.480313 1971324 fix.go:54] fixHost starting: 
	I0120 14:01:47.480706 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:01:47.480762 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:01:47.499438 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0120 14:01:47.499966 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:01:47.500523 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:01:47.500551 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:01:47.501028 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:01:47.501254 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:01:47.501413 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:01:47.503562 1971324 fix.go:112] recreateIfNeeded on default-k8s-diff-port-727256: state=Stopped err=<nil>
	I0120 14:01:47.503596 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	W0120 14:01:47.503774 1971324 fix.go:138] unexpected machine state, will restart: <nil>
	I0120 14:01:47.505539 1971324 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-727256" ...
	I0120 14:01:44.778211 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.279184 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.480011 1971155 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381707.434903722
	
	I0120 14:01:47.480050 1971155 fix.go:216] guest clock: 1737381707.434903722
	I0120 14:01:47.480061 1971155 fix.go:229] Guest: 2025-01-20 14:01:47.434903722 +0000 UTC Remote: 2025-01-20 14:01:47.359160605 +0000 UTC m=+19.980745135 (delta=75.743117ms)
	I0120 14:01:47.480090 1971155 fix.go:200] guest clock delta is within tolerance: 75.743117ms
	I0120 14:01:47.480098 1971155 start.go:83] releasing machines lock for "old-k8s-version-191446", held for 19.935238773s
	I0120 14:01:47.480132 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.480450 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:47.483367 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.483792 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.483828 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.483945 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484435 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484606 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .DriverName
	I0120 14:01:47.484699 1971155 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:01:47.484761 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.484899 1971155 ssh_runner.go:195] Run: cat /version.json
	I0120 14:01:47.484929 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHHostname
	I0120 14:01:47.487568 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.487980 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.488011 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488093 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488211 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.488434 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.488591 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.488630 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:47.488653 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:47.488741 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.488862 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHPort
	I0120 14:01:47.489009 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHKeyPath
	I0120 14:01:47.489153 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetSSHUsername
	I0120 14:01:47.489343 1971155 sshutil.go:53] new ssh client: &{IP:192.168.61.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/old-k8s-version-191446/id_rsa Username:docker}
	I0120 14:01:47.608326 1971155 ssh_runner.go:195] Run: systemctl --version
	I0120 14:01:47.614709 1971155 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:01:47.772139 1971155 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:01:47.780427 1971155 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:01:47.780502 1971155 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:01:47.798266 1971155 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:01:47.798304 1971155 start.go:495] detecting cgroup driver to use...
	I0120 14:01:47.798398 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:01:47.815867 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:01:47.835855 1971155 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:01:47.835918 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:01:47.853481 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:01:47.869379 1971155 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:01:47.988401 1971155 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:01:48.193315 1971155 docker.go:233] disabling docker service ...
	I0120 14:01:48.193390 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:01:48.214201 1971155 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:01:48.230964 1971155 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:01:48.377733 1971155 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:01:48.516198 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:01:48.533486 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:01:48.557115 1971155 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0120 14:01:48.557197 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.570080 1971155 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:01:48.570162 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.584225 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.596995 1971155 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:01:48.609663 1971155 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:01:48.623942 1971155 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:01:48.637099 1971155 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:01:48.637171 1971155 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:01:48.653873 1971155 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
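	(Aside: the two steps above are the usual check-then-load sequence for bridge netfilter plus IPv4 forwarding. Condensed into a standalone sketch with the same paths as in the log, error handling omitted:)
	  # make bridged traffic visible to iptables, then allow IPv4 forwarding
	  if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	      sudo modprobe br_netfilter    # the sysctl key only exists once the module is loaded
	  fi
	  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null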
	I0120 14:01:48.666171 1971155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:01:48.807308 1971155 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:01:48.914634 1971155 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:01:48.914731 1971155 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:01:48.920471 1971155 start.go:563] Will wait 60s for crictl version
	I0120 14:01:48.920558 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:48.924644 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:01:48.966008 1971155 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:01:48.966111 1971155 ssh_runner.go:195] Run: crio --version
	I0120 14:01:48.995639 1971155 ssh_runner.go:195] Run: crio --version
	I0120 14:01:49.031088 1971155 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0120 14:01:47.185914 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:49.187141 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:47.506801 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Start
	I0120 14:01:47.507007 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) starting domain...
	I0120 14:01:47.507037 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) ensuring networks are active...
	I0120 14:01:47.507737 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Ensuring network default is active
	I0120 14:01:47.508168 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Ensuring network mk-default-k8s-diff-port-727256 is active
	I0120 14:01:47.508707 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) getting domain XML...
	I0120 14:01:47.509515 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) creating domain...
	I0120 14:01:48.889668 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) waiting for IP...
	I0120 14:01:48.890857 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:48.891526 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:48.891694 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:48.891527 1971420 retry.go:31] will retry after 199.178216ms: waiting for domain to come up
	I0120 14:01:49.092132 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.092672 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.092706 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.092636 1971420 retry.go:31] will retry after 255.633273ms: waiting for domain to come up
	I0120 14:01:49.350430 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.351160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.351194 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.351128 1971420 retry.go:31] will retry after 428.048868ms: waiting for domain to come up
	I0120 14:01:49.781110 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.781882 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:49.781964 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:49.781864 1971420 retry.go:31] will retry after 580.304151ms: waiting for domain to come up
	I0120 14:01:50.363965 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.364533 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.364559 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:50.364529 1971420 retry.go:31] will retry after 531.332191ms: waiting for domain to come up
	I0120 14:01:49.032269 1971155 main.go:141] libmachine: (old-k8s-version-191446) Calling .GetIP
	I0120 14:01:49.035945 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:49.036382 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:83:fb", ip: ""} in network mk-old-k8s-version-191446: {Iface:virbr3 ExpiryTime:2025-01-20 15:01:39 +0000 UTC Type:0 Mac:52:54:00:87:83:fb Iaid: IPaddr:192.168.61.215 Prefix:24 Hostname:old-k8s-version-191446 Clientid:01:52:54:00:87:83:fb}
	I0120 14:01:49.036423 1971155 main.go:141] libmachine: (old-k8s-version-191446) DBG | domain old-k8s-version-191446 has defined IP address 192.168.61.215 and MAC address 52:54:00:87:83:fb in network mk-old-k8s-version-191446
	I0120 14:01:49.036733 1971155 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0120 14:01:49.041470 1971155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:01:49.055442 1971155 kubeadm.go:883] updating cluster {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:01:49.055654 1971155 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 14:01:49.055738 1971155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:01:49.111537 1971155 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 14:01:49.111603 1971155 ssh_runner.go:195] Run: which lz4
	I0120 14:01:49.116646 1971155 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:01:49.121632 1971155 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:01:49.121670 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0120 14:01:51.019564 1971155 crio.go:462] duration metric: took 1.902969728s to copy over tarball
	I0120 14:01:51.019668 1971155 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
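	(Aside: the preload flow above amounts to "copy the tarball to the guest only if it is not already there, then unpack it into /var". A rough host-side sketch, assuming a plain ssh/scp transport in place of minikube's internal ssh_runner:)
	  GUEST=docker@192.168.61.215   # guest user/IP from the log; key handling omitted
	  TARBALL=/home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	  if ! ssh "$GUEST" stat /preloaded.tar.lz4 >/dev/null 2>&1; then
	      scp "$TARBALL" "$GUEST":/preloaded.tar.lz4
	  fi
	  ssh "$GUEST" sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4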
	I0120 14:01:49.280435 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:51.780700 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:51.189623 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:53.687386 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:50.897267 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.897845 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:50.897880 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:50.897808 1971420 retry.go:31] will retry after 772.118387ms: waiting for domain to come up
	I0120 14:01:51.671806 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:51.672432 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:51.672466 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:51.672381 1971420 retry.go:31] will retry after 1.060623833s: waiting for domain to come up
	I0120 14:01:52.735398 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:52.735986 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:52.736018 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:52.735943 1971420 retry.go:31] will retry after 1.002731806s: waiting for domain to come up
	I0120 14:01:53.740048 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:53.740702 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:53.740731 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:53.740659 1971420 retry.go:31] will retry after 1.680491712s: waiting for domain to come up
	I0120 14:01:55.423577 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:55.424100 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:55.424135 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:55.424031 1971420 retry.go:31] will retry after 1.794880075s: waiting for domain to come up
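	(Aside: the repeated "will retry after …: waiting for domain to come up" lines are a poll with growing backoff on the guest's DHCP lease. A rough shell equivalent, using virsh domifaddr as a stand-in for the driver's own lease lookup:)
	  # poll until the restarted domain reports an IPv4 lease, roughly doubling the wait each attempt
	  delay=1
	  for attempt in $(seq 1 10); do
	      if sudo virsh domifaddr default-k8s-diff-port-727256 | grep -q ipv4; then
	          echo "domain is up"; break
	      fi
	      echo "attempt $attempt: no IP yet, retrying in ${delay}s"
	      sleep "$delay"
	      delay=$((delay * 2))
	  done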
	I0120 14:01:54.192207 1971155 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.172482213s)
	I0120 14:01:54.192247 1971155 crio.go:469] duration metric: took 3.172642787s to extract the tarball
	I0120 14:01:54.192257 1971155 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 14:01:54.241548 1971155 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:01:54.283118 1971155 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0120 14:01:54.283147 1971155 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0120 14:01:54.283222 1971155 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.283246 1971155 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.283292 1971155 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.283311 1971155 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.283369 1971155 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0120 14:01:54.283369 1971155 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.283370 1971155 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.283429 1971155 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.285174 1971155 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.285194 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.285222 1971155 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.285232 1971155 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.285484 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.285533 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.285551 1971155 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0120 14:01:54.285520 1971155 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.443508 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.451962 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.459320 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.478139 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.482365 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.490130 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.491742 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0120 14:01:54.535842 1971155 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0120 14:01:54.535930 1971155 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.536008 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.556510 1971155 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0120 14:01:54.556563 1971155 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.556617 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.604701 1971155 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0120 14:01:54.604747 1971155 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.604801 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648817 1971155 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0120 14:01:54.648847 1971155 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0120 14:01:54.648872 1971155 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.648887 1971155 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.648932 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648951 1971155 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0120 14:01:54.648986 1971155 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.649059 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.648932 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.662210 1971155 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0120 14:01:54.662265 1971155 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0120 14:01:54.662271 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.662303 1971155 ssh_runner.go:195] Run: which crictl
	I0120 14:01:54.662304 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.662392 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.662403 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.666373 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.666427 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:54.784739 1971155 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:01:54.815550 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:54.815585 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:54.815637 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:54.815650 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:54.820367 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:54.820421 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:54.820459 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:55.000111 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0120 14:01:55.000218 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0120 14:01:55.013244 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0120 14:01:55.013276 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:55.013348 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0120 14:01:55.013372 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0120 14:01:55.015126 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0120 14:01:55.144073 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0120 14:01:55.144169 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0120 14:01:55.175966 1971155 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0120 14:01:55.175984 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0120 14:01:55.179810 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0120 14:01:55.179835 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0120 14:01:55.180076 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0120 14:01:55.216565 1971155 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0120 14:01:55.216646 1971155 cache_images.go:92] duration metric: took 933.479899ms to LoadCachedImages
	W0120 14:01:55.216768 1971155 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0120 14:01:55.216789 1971155 kubeadm.go:934] updating node { 192.168.61.215 8443 v1.20.0 crio true true} ...
	I0120 14:01:55.216907 1971155 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-191446 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:01:55.216973 1971155 ssh_runner.go:195] Run: crio config
	I0120 14:01:55.272348 1971155 cni.go:84] Creating CNI manager for ""
	I0120 14:01:55.272377 1971155 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:01:55.272387 1971155 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:01:55.272407 1971155 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.215 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-191446 NodeName:old-k8s-version-191446 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0120 14:01:55.272581 1971155 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-191446"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:01:55.272661 1971155 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0120 14:01:55.285452 1971155 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:01:55.285532 1971155 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:01:55.300604 1971155 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0120 14:01:55.321434 1971155 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:01:55.339855 1971155 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0120 14:01:55.360605 1971155 ssh_runner.go:195] Run: grep 192.168.61.215	control-plane.minikube.internal$ /etc/hosts
	I0120 14:01:55.364977 1971155 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:01:55.380053 1971155 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:01:55.499744 1971155 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:01:55.518232 1971155 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446 for IP: 192.168.61.215
	I0120 14:01:55.518267 1971155 certs.go:194] generating shared ca certs ...
	I0120 14:01:55.518300 1971155 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:01:55.518512 1971155 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:01:55.518553 1971155 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:01:55.518563 1971155 certs.go:256] generating profile certs ...
	I0120 14:01:55.571153 1971155 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.key
	I0120 14:01:55.571288 1971155 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key.d5f4b946
	I0120 14:01:55.571350 1971155 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key
	I0120 14:01:55.571517 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:01:55.571559 1971155 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:01:55.571570 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:01:55.571606 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:01:55.571641 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:01:55.571671 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:01:55.571733 1971155 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:01:55.572624 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:01:55.613349 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:01:55.645837 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:01:55.688637 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:01:55.736949 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0120 14:01:55.786459 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0120 14:01:55.833912 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:01:55.861615 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:01:55.891303 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:01:55.920272 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:01:55.947553 1971155 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:01:55.979159 1971155 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:01:56.002476 1971155 ssh_runner.go:195] Run: openssl version
	I0120 14:01:56.011075 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:01:56.026823 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.033320 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.033404 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:01:56.041787 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:01:56.055968 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:01:56.072918 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.078642 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.078744 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:01:56.085416 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:01:56.101948 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:01:56.117742 1971155 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.123020 1971155 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.123086 1971155 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:01:56.129661 1971155 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
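	(Aside: the openssl/ln pairs above set up OpenSSL's hashed-directory CA lookup; the whole sequence reduces to the loop below, using the same certificates and hashes that appear in the log:)
	  # link each installed CA cert under its OpenSSL subject hash so TLS clients can find it
	  for pem in /usr/share/ca-certificates/*.pem; do
	      h=$(openssl x509 -hash -noout -in "$pem")      # e.g. 3ec20f2e, b5213941, 51391683 above
	      sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"
	  done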
	I0120 14:01:56.142113 1971155 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:01:56.147841 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:01:56.154627 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:01:56.161139 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:01:56.167754 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:01:56.174520 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:01:56.181204 1971155 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0120 14:01:56.187656 1971155 kubeadm.go:392] StartCluster: {Name:old-k8s-version-191446 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-191446 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:01:56.187767 1971155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:01:56.187860 1971155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:01:56.233626 1971155 cri.go:89] found id: ""
	I0120 14:01:56.233718 1971155 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:01:56.245027 1971155 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:01:56.245062 1971155 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:01:56.245126 1971155 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:01:56.258403 1971155 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:01:56.259211 1971155 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-191446" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:01:56.259525 1971155 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-191446" cluster setting kubeconfig missing "old-k8s-version-191446" context setting]
	I0120 14:01:56.260060 1971155 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:01:56.288258 1971155 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:01:56.302812 1971155 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.215
	I0120 14:01:56.302855 1971155 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:01:56.302872 1971155 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 14:01:56.302942 1971155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:01:56.343694 1971155 cri.go:89] found id: ""
	I0120 14:01:56.343794 1971155 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:01:56.364228 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:01:56.375163 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:01:56.375187 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:01:56.375260 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:01:56.386527 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:01:56.386622 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:01:56.398715 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:01:56.410031 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:01:56.410112 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:01:56.420983 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:01:56.433109 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:01:56.433192 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:01:56.447385 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:01:56.460977 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:01:56.461066 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:01:56.472124 1971155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:01:56.484344 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:56.617563 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.344622 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:54.280536 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:56.779010 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:58.779726 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:55.714950 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:58.186438 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:01:57.220139 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:57.220694 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:57.220723 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:57.220656 1971420 retry.go:31] will retry after 2.261913004s: waiting for domain to come up
	I0120 14:01:59.484214 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:01:59.484791 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:01:59.484820 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:01:59.484718 1971420 retry.go:31] will retry after 2.630282337s: waiting for domain to come up
	I0120 14:01:57.621080 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.732306 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:01:57.856823 1971155 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:01:57.856931 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:58.357005 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:58.857625 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:59.358085 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:01:59.857398 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:00.357930 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:00.857134 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.357106 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.857163 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:02.357462 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:01.278692 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:03.777558 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:00.689940 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:03.185114 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:02.116624 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:02.117129 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:02:02.117163 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:02:02.117089 1971420 retry.go:31] will retry after 3.120909651s: waiting for domain to come up
	I0120 14:02:05.239389 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:05.239901 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | unable to find current IP address of domain default-k8s-diff-port-727256 in network mk-default-k8s-diff-port-727256
	I0120 14:02:05.239953 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | I0120 14:02:05.239877 1971420 retry.go:31] will retry after 4.391800801s: waiting for domain to come up
	I0120 14:02:02.857734 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:03.357569 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:03.857955 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:04.357274 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:04.857819 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.357138 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.857025 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:06.357050 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:06.857399 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:07.357029 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:05.777988 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:08.278483 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:05.188225 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:07.685349 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:10.186075 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:09.634193 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.634637 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has current primary IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.634659 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) found domain IP: 192.168.72.104
	I0120 14:02:09.634684 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) reserving static IP address...
	I0120 14:02:09.635059 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) reserved static IP address 192.168.72.104 for domain default-k8s-diff-port-727256
	I0120 14:02:09.635098 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-727256", mac: "52:54:00:59:90:f7", ip: "192.168.72.104"} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.635109 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) waiting for SSH...
	I0120 14:02:09.635133 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | skip adding static IP to network mk-default-k8s-diff-port-727256 - found existing host DHCP lease matching {name: "default-k8s-diff-port-727256", mac: "52:54:00:59:90:f7", ip: "192.168.72.104"}
	I0120 14:02:09.635148 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Getting to WaitForSSH function...
	I0120 14:02:09.637199 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.637520 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.637554 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.637664 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Using SSH client type: external
	I0120 14:02:09.637695 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Using SSH private key: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa (-rw-------)
	I0120 14:02:09.637761 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0120 14:02:09.637785 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | About to run SSH command:
	I0120 14:02:09.637834 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | exit 0
	I0120 14:02:09.763002 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | SSH cmd err, output: <nil>: 
	I0120 14:02:09.763410 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetConfigRaw
	I0120 14:02:09.764140 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:09.766862 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.767255 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.767309 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.767547 1971324 profile.go:143] Saving config to /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/config.json ...
	I0120 14:02:09.767747 1971324 machine.go:93] provisionDockerMachine start ...
	I0120 14:02:09.767768 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:09.768084 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:09.770642 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.770978 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.771008 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.771159 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:09.771355 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.771522 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.771651 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:09.771843 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:09.772116 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:09.772135 1971324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0120 14:02:09.887277 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0120 14:02:09.887306 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:09.887607 1971324 buildroot.go:166] provisioning hostname "default-k8s-diff-port-727256"
	I0120 14:02:09.887644 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:09.887855 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:09.890533 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.890940 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:09.890972 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:09.891158 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:09.891363 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.891514 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:09.891625 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:09.891766 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:09.891982 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:09.891996 1971324 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-727256 && echo "default-k8s-diff-port-727256" | sudo tee /etc/hostname
	I0120 14:02:10.015326 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-727256
	
	I0120 14:02:10.015358 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.018488 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.018889 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.018920 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.019174 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.019397 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.019591 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.019775 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.019935 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:10.020121 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:10.020141 1971324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-727256' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-727256/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-727256' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0120 14:02:10.136552 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0120 14:02:10.136593 1971324 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20242-1920423/.minikube CaCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20242-1920423/.minikube}
	I0120 14:02:10.136631 1971324 buildroot.go:174] setting up certificates
	I0120 14:02:10.136653 1971324 provision.go:84] configureAuth start
	I0120 14:02:10.136667 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetMachineName
	I0120 14:02:10.137020 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:10.140046 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.140577 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.140627 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.140766 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.143806 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.144185 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.144220 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.144340 1971324 provision.go:143] copyHostCerts
	I0120 14:02:10.144408 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem, removing ...
	I0120 14:02:10.144433 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem
	I0120 14:02:10.144518 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/cert.pem (1123 bytes)
	I0120 14:02:10.144663 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem, removing ...
	I0120 14:02:10.144675 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem
	I0120 14:02:10.144716 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/key.pem (1679 bytes)
	I0120 14:02:10.144827 1971324 exec_runner.go:144] found /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem, removing ...
	I0120 14:02:10.144838 1971324 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem
	I0120 14:02:10.144865 1971324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.pem (1082 bytes)
	I0120 14:02:10.144958 1971324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-727256 san=[127.0.0.1 192.168.72.104 default-k8s-diff-port-727256 localhost minikube]
	I0120 14:02:07.857904 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:08.357419 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:08.857241 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:09.357914 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:09.857010 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.357118 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.857037 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:11.357243 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:11.857017 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:12.357401 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:10.704568 1971324 provision.go:177] copyRemoteCerts
	I0120 14:02:10.704642 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0120 14:02:10.704670 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.707581 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.707968 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.708005 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.708165 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.708406 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.708556 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.708705 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:10.798392 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0120 14:02:10.825489 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0120 14:02:10.851203 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0120 14:02:10.877144 1971324 provision.go:87] duration metric: took 740.469356ms to configureAuth
	I0120 14:02:10.877184 1971324 buildroot.go:189] setting minikube options for container-runtime
	I0120 14:02:10.877372 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:02:10.877454 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:10.880681 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.881100 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:10.881135 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:10.881295 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:10.881487 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.881661 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:10.881824 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:10.881986 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:10.882152 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:10.882168 1971324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0120 14:02:11.118214 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0120 14:02:11.118246 1971324 machine.go:96] duration metric: took 1.350483814s to provisionDockerMachine
	I0120 14:02:11.118262 1971324 start.go:293] postStartSetup for "default-k8s-diff-port-727256" (driver="kvm2")
	I0120 14:02:11.118274 1971324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0120 14:02:11.118291 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.118662 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0120 14:02:11.118706 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.121765 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.122132 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.122160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.122325 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.122539 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.122849 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.123019 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.205783 1971324 ssh_runner.go:195] Run: cat /etc/os-release
	I0120 14:02:11.211240 1971324 info.go:137] Remote host: Buildroot 2023.02.9
	I0120 14:02:11.211282 1971324 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/addons for local assets ...
	I0120 14:02:11.211389 1971324 filesync.go:126] Scanning /home/jenkins/minikube-integration/20242-1920423/.minikube/files for local assets ...
	I0120 14:02:11.211524 1971324 filesync.go:149] local asset: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem -> 19276722.pem in /etc/ssl/certs
	I0120 14:02:11.211679 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0120 14:02:11.222226 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:02:11.248964 1971324 start.go:296] duration metric: took 130.683064ms for postStartSetup
	I0120 14:02:11.249013 1971324 fix.go:56] duration metric: took 23.768701383s for fixHost
	I0120 14:02:11.249043 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.252350 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.252735 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.252784 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.253016 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.253244 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.253451 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.253587 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.253769 1971324 main.go:141] libmachine: Using SSH client type: native
	I0120 14:02:11.254003 1971324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil>  [] 0s} 192.168.72.104 22 <nil> <nil>}
	I0120 14:02:11.254018 1971324 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0120 14:02:11.360027 1971324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737381731.321642168
	
	I0120 14:02:11.360058 1971324 fix.go:216] guest clock: 1737381731.321642168
	I0120 14:02:11.360067 1971324 fix.go:229] Guest: 2025-01-20 14:02:11.321642168 +0000 UTC Remote: 2025-01-20 14:02:11.249019145 +0000 UTC m=+40.644950772 (delta=72.623023ms)
	I0120 14:02:11.360095 1971324 fix.go:200] guest clock delta is within tolerance: 72.623023ms
	I0120 14:02:11.360110 1971324 start.go:83] releasing machines lock for "default-k8s-diff-port-727256", held for 23.8798308s
	I0120 14:02:11.360147 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.360474 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:11.363630 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.364131 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.364160 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.364441 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365063 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365248 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:02:11.365348 1971324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0120 14:02:11.365404 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.365419 1971324 ssh_runner.go:195] Run: cat /version.json
	I0120 14:02:11.365439 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:02:11.368411 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.368839 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.368879 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.368903 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.369109 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.369341 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.369383 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:11.369421 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:11.369557 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.369661 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:02:11.369746 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.369900 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:02:11.370094 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:02:11.370254 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:02:11.448584 1971324 ssh_runner.go:195] Run: systemctl --version
	I0120 14:02:11.476726 1971324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0120 14:02:11.630047 1971324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0120 14:02:11.636964 1971324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0120 14:02:11.637055 1971324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0120 14:02:11.654243 1971324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0120 14:02:11.654288 1971324 start.go:495] detecting cgroup driver to use...
	I0120 14:02:11.654363 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0120 14:02:11.671320 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0120 14:02:11.687866 1971324 docker.go:217] disabling cri-docker service (if available) ...
	I0120 14:02:11.687931 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0120 14:02:11.703932 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0120 14:02:11.718827 1971324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0120 14:02:11.847210 1971324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0120 14:02:12.007623 1971324 docker.go:233] disabling docker service ...
	I0120 14:02:12.007698 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0120 14:02:12.024946 1971324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0120 14:02:12.039357 1971324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0120 14:02:12.198785 1971324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0120 14:02:12.318653 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0120 14:02:12.335226 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0120 14:02:12.356118 1971324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0120 14:02:12.356185 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.368853 1971324 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0120 14:02:12.368928 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.382590 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.395155 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.407707 1971324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0120 14:02:12.420260 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.432650 1971324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.451911 1971324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0120 14:02:12.463708 1971324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0120 14:02:12.474047 1971324 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0120 14:02:12.474171 1971324 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0120 14:02:12.487873 1971324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0120 14:02:12.498585 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:02:12.613685 1971324 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0120 14:02:12.729768 1971324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0120 14:02:12.729875 1971324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0120 14:02:12.734978 1971324 start.go:563] Will wait 60s for crictl version
	I0120 14:02:12.735064 1971324 ssh_runner.go:195] Run: which crictl
	I0120 14:02:12.739280 1971324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0120 14:02:12.786678 1971324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0120 14:02:12.786793 1971324 ssh_runner.go:195] Run: crio --version
	I0120 14:02:12.817307 1971324 ssh_runner.go:195] Run: crio --version
	I0120 14:02:12.852593 1971324 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0120 14:02:10.778869 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.782521 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.186380 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:14.187082 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:12.853765 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetIP
	I0120 14:02:12.856623 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:12.857007 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:02:12.857053 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:02:12.857241 1971324 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0120 14:02:12.861728 1971324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:02:12.877000 1971324 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0120 14:02:12.877127 1971324 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0120 14:02:12.877169 1971324 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:02:12.929986 1971324 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0120 14:02:12.930071 1971324 ssh_runner.go:195] Run: which lz4
	I0120 14:02:12.934799 1971324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0120 14:02:12.939259 1971324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0120 14:02:12.939291 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0120 14:02:15.168447 1971324 crio.go:462] duration metric: took 2.233676027s to copy over tarball
	I0120 14:02:15.168587 1971324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0120 14:02:12.857737 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:13.358043 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:13.857191 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:14.357168 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:14.857760 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.357900 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.857889 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:16.357039 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:16.857812 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:17.358144 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:15.279029 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:17.281259 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:16.687293 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:18.717798 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:17.552550 1971324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.383920665s)
	I0120 14:02:17.552588 1971324 crio.go:469] duration metric: took 2.38410161s to extract the tarball
	I0120 14:02:17.552598 1971324 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0120 14:02:17.595819 1971324 ssh_runner.go:195] Run: sudo crictl images --output json
	I0120 14:02:17.649094 1971324 crio.go:514] all images are preloaded for cri-o runtime.
	I0120 14:02:17.649124 1971324 cache_images.go:84] Images are preloaded, skipping loading
	I0120 14:02:17.649135 1971324 kubeadm.go:934] updating node { 192.168.72.104 8444 v1.32.0 crio true true} ...
	I0120 14:02:17.649302 1971324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-727256 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0120 14:02:17.649381 1971324 ssh_runner.go:195] Run: crio config
	I0120 14:02:17.704561 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:02:17.704586 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:02:17.704598 1971324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0120 14:02:17.704619 1971324 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.104 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-727256 NodeName:default-k8s-diff-port-727256 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0120 14:02:17.704750 1971324 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.104
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-727256"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.104"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.104"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0120 14:02:17.704816 1971324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0120 14:02:17.716061 1971324 binaries.go:44] Found k8s binaries, skipping transfer
	I0120 14:02:17.716155 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0120 14:02:17.727801 1971324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0120 14:02:17.748166 1971324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0120 14:02:17.766985 1971324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
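The multi-document kubeadm config shown above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A quick way to sanity-check such a file before handing it to kubeadm is to decode each YAML document and confirm the API endpoint fields. This is only an illustrative sketch, not minikube's own code; it assumes gopkg.in/yaml.v3 is available, and the file path and expected values are taken from the log above:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// initConfig captures just the fields we want to verify from the
// kubeadm InitConfiguration document; other documents (ClusterConfiguration,
// KubeletConfiguration, KubeProxyConfiguration) decode to zero values and are skipped.
type initConfig struct {
	Kind             string `yaml:"kind"`
	LocalAPIEndpoint struct {
		AdvertiseAddress string `yaml:"advertiseAddress"`
		BindPort         int    `yaml:"bindPort"`
	} `yaml:"localAPIEndpoint"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // iterates over the "---"-separated documents
	for {
		var doc initConfig
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if doc.Kind == "InitConfiguration" {
			// For the cluster above this should print 192.168.72.104 and 8444.
			fmt.Printf("advertiseAddress=%s bindPort=%d\n",
				doc.LocalAPIEndpoint.AdvertiseAddress, doc.LocalAPIEndpoint.BindPort)
		}
	}
}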
	I0120 14:02:17.787650 1971324 ssh_runner.go:195] Run: grep 192.168.72.104	control-plane.minikube.internal$ /etc/hosts
	I0120 14:02:17.791993 1971324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0120 14:02:17.808216 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:02:17.961542 1971324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:02:17.984203 1971324 certs.go:68] Setting up /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256 for IP: 192.168.72.104
	I0120 14:02:17.984233 1971324 certs.go:194] generating shared ca certs ...
	I0120 14:02:17.984291 1971324 certs.go:226] acquiring lock for ca certs: {Name:mk6a9e370987a20273940cc7fb21abf350c4ac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:02:17.984557 1971324 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key
	I0120 14:02:17.984648 1971324 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key
	I0120 14:02:17.984666 1971324 certs.go:256] generating profile certs ...
	I0120 14:02:17.984792 1971324 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.key
	I0120 14:02:17.984852 1971324 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.key.23647750
	I0120 14:02:17.984912 1971324 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.key
	I0120 14:02:17.985077 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem (1338 bytes)
	W0120 14:02:17.985121 1971324 certs.go:480] ignoring /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672_empty.pem, impossibly tiny 0 bytes
	I0120 14:02:17.985133 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca-key.pem (1675 bytes)
	I0120 14:02:17.985155 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/ca.pem (1082 bytes)
	I0120 14:02:17.985178 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/cert.pem (1123 bytes)
	I0120 14:02:17.985198 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/key.pem (1679 bytes)
	I0120 14:02:17.985256 1971324 certs.go:484] found cert: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem (1708 bytes)
	I0120 14:02:17.985878 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0120 14:02:18.048719 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0120 14:02:18.112171 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0120 14:02:18.145094 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0120 14:02:18.177563 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0120 14:02:18.207741 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0120 14:02:18.238193 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0120 14:02:18.267493 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0120 14:02:18.299204 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0120 14:02:18.326722 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/certs/1927672.pem --> /usr/share/ca-certificates/1927672.pem (1338 bytes)
	I0120 14:02:18.354365 1971324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/ssl/certs/19276722.pem --> /usr/share/ca-certificates/19276722.pem (1708 bytes)
	I0120 14:02:18.387004 1971324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0120 14:02:18.407331 1971324 ssh_runner.go:195] Run: openssl version
	I0120 14:02:18.414499 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0120 14:02:18.428237 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.433437 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 12:50 /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.433525 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0120 14:02:18.440279 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0120 14:02:18.453372 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1927672.pem && ln -fs /usr/share/ca-certificates/1927672.pem /etc/ssl/certs/1927672.pem"
	I0120 14:02:18.466685 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.472158 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 20 12:58 /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.472221 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1927672.pem
	I0120 14:02:18.479048 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1927672.pem /etc/ssl/certs/51391683.0"
	I0120 14:02:18.492239 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19276722.pem && ln -fs /usr/share/ca-certificates/19276722.pem /etc/ssl/certs/19276722.pem"
	I0120 14:02:18.505538 1971324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.511360 1971324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 20 12:58 /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.511449 1971324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19276722.pem
	I0120 14:02:18.518290 1971324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19276722.pem /etc/ssl/certs/3ec20f2e.0"
	I0120 14:02:18.531250 1971324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0120 14:02:18.536241 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0120 14:02:18.543115 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0120 14:02:18.549735 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0120 14:02:18.556016 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0120 14:02:18.563051 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0120 14:02:18.569460 1971324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
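The `openssl x509 -checkend 86400` runs above confirm that each control-plane certificate is still valid for at least 24 hours before the cluster restart proceeds. The same check can be written directly against crypto/x509; this is a minimal sketch for one certificate (the file path comes from the log, not from minikube's certs.go):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h, expires", cert.NotAfter)
}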
	I0120 14:02:18.576252 1971324 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-727256 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-727256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 14:02:18.576356 1971324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0120 14:02:18.576422 1971324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:02:18.620494 1971324 cri.go:89] found id: ""
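The empty result here just means no kube-system containers are running yet on the node being restarted. Reproducing the same query by hand amounts to invoking crictl with the namespace label filter, as the log line shows; a small Go sketch of shelling out to it (assuming crictl is on PATH and may be run via sudo; this is not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out)) // --quiet prints one container ID per line
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}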
	I0120 14:02:18.620569 1971324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0120 14:02:18.631697 1971324 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0120 14:02:18.631720 1971324 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0120 14:02:18.631768 1971324 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0120 14:02:18.642156 1971324 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0120 14:02:18.643051 1971324 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-727256" does not appear in /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:02:18.643528 1971324 kubeconfig.go:62] /home/jenkins/minikube-integration/20242-1920423/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-727256" cluster setting kubeconfig missing "default-k8s-diff-port-727256" context setting]
	I0120 14:02:18.644170 1971324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:02:18.668914 1971324 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0120 14:02:18.683072 1971324 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.104
	I0120 14:02:18.683114 1971324 kubeadm.go:1160] stopping kube-system containers ...
	I0120 14:02:18.683129 1971324 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0120 14:02:18.683183 1971324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0120 14:02:18.729285 1971324 cri.go:89] found id: ""
	I0120 14:02:18.729378 1971324 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0120 14:02:18.747615 1971324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:02:18.760814 1971324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:02:18.760838 1971324 kubeadm.go:157] found existing configuration files:
	
	I0120 14:02:18.760894 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 14:02:18.770641 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:02:18.770724 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:02:18.781179 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 14:02:18.792949 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:02:18.793028 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:02:18.804366 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 14:02:18.815263 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:02:18.815346 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:02:18.825942 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 14:02:18.835903 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:02:18.835982 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:02:18.845972 1971324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:02:18.859961 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.003738 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.608160 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.849647 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:19.912750 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:20.009660 1971324 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:02:20.009754 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.510534 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:17.857538 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:18.357133 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:18.857266 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.357682 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.857168 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.357018 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:20.857784 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.357312 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.857374 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:22.357052 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:19.469918 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:21.779262 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:21.010159 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:21.032056 1971324 api_server.go:72] duration metric: took 1.022395241s to wait for apiserver process to appear ...
	I0120 14:02:21.032096 1971324 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:02:21.032131 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:21.032697 1971324 api_server.go:269] stopped: https://192.168.72.104:8444/healthz: Get "https://192.168.72.104:8444/healthz": dial tcp 192.168.72.104:8444: connect: connection refused
	I0120 14:02:21.532363 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:23.847330 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:02:23.847369 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:02:23.847385 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:23.877401 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0120 14:02:23.877441 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0120 14:02:24.032826 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:24.039566 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:02:24.039598 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:02:24.532837 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:24.539028 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0120 14:02:24.539067 1971324 api_server.go:103] status: https://192.168.72.104:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0120 14:02:25.032465 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:02:25.039986 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0120 14:02:25.049377 1971324 api_server.go:141] control plane version: v1.32.0
	I0120 14:02:25.049420 1971324 api_server.go:131] duration metric: took 4.017316014s to wait for apiserver health ...
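The healthz wait above is a plain HTTPS poll: the client keeps hitting /healthz, tolerating the 403 from the anonymous user and the 500s while post-start hooks finish, until a 200 comes back. A stripped-down sketch of that loop follows; TLS verification is skipped only to keep the example short (minikube itself trusts the cluster CA), and the URL and timeout are taken from the log rather than from api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// NOTE: verification disabled purely for brevity in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.104:8444/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode) // e.g. 403 or 500 while booting
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}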
	I0120 14:02:25.049433 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:02:25.049442 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:02:25.051482 1971324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:02:21.185126 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:23.186698 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:25.052855 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:02:25.066022 1971324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:02:25.095180 1971324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:02:25.114905 1971324 system_pods.go:59] 8 kube-system pods found
	I0120 14:02:25.114960 1971324 system_pods.go:61] "coredns-668d6bf9bc-bz5qj" [d7374913-ed7c-42dc-a94f-44e1e2c757a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0120 14:02:25.114976 1971324 system_pods.go:61] "etcd-default-k8s-diff-port-727256" [1b7d5ec9-7630-4785-9c45-41ecdb748a8a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0120 14:02:25.114986 1971324 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-727256" [41957bec-6146-4451-a58e-80cfc0954d72] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0120 14:02:25.115001 1971324 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-727256" [700634af-068c-43a9-93fd-cb10680f5547] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0120 14:02:25.115015 1971324 system_pods.go:61] "kube-proxy-q48xh" [714b43b5-29d9-4ffb-a571-d319ac71ea64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0120 14:02:25.115023 1971324 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-727256" [37e3619f-2d6c-4ffd-a8a2-e9e935b79342] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0120 14:02:25.115037 1971324 system_pods.go:61] "metrics-server-f79f97bbb-wgptn" [c1255c51-78a3-4f21-a054-b7eec52e8021] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:02:25.115045 1971324 system_pods.go:61] "storage-provisioner" [f116e0d4-4c99-46b2-bb50-448d19e948da] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0120 14:02:25.115063 1971324 system_pods.go:74] duration metric: took 19.845736ms to wait for pod list to return data ...
	I0120 14:02:25.115078 1971324 node_conditions.go:102] verifying NodePressure condition ...
	I0120 14:02:25.140084 1971324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0120 14:02:25.140127 1971324 node_conditions.go:123] node cpu capacity is 2
	I0120 14:02:25.140143 1971324 node_conditions.go:105] duration metric: took 25.059269ms to run NodePressure ...
	I0120 14:02:25.140170 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0120 14:02:25.471605 1971324 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0120 14:02:25.475871 1971324 kubeadm.go:739] kubelet initialised
	I0120 14:02:25.475897 1971324 kubeadm.go:740] duration metric: took 4.262299ms waiting for restarted kubelet to initialise ...
	I0120 14:02:25.475907 1971324 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:02:25.481730 1971324 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:22.857953 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:23.357118 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:23.857846 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.357974 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.858083 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:25.357532 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:25.857724 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:26.357640 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:26.857695 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:27.357848 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:24.279782 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:26.777640 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:28.778330 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:25.686765 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:28.186774 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:27.488205 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:29.990080 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:27.857637 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:28.357980 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:28.857073 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:29.357768 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:29.857689 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.358021 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.857725 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:31.357087 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:31.857093 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:32.358124 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:30.783033 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:33.279302 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:30.685246 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:33.195660 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:31.992749 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:34.489038 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:32.857233 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:33.357972 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:33.857268 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:34.357580 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:34.857317 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.357391 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.858044 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:36.357666 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:36.857501 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:37.357800 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:35.282839 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:37.778057 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:35.685341 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:37.685683 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:40.185648 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:35.989736 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:35.989764 1971324 pod_ready.go:82] duration metric: took 10.507995257s for pod "coredns-668d6bf9bc-bz5qj" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.989775 1971324 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.994950 1971324 pod_ready.go:93] pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:35.994974 1971324 pod_ready.go:82] duration metric: took 5.193222ms for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:35.994984 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:38.002261 1971324 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:39.002130 1971324 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.002163 1971324 pod_ready.go:82] duration metric: took 3.007172332s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.002175 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.007066 1971324 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.007092 1971324 pod_ready.go:82] duration metric: took 4.909894ms for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.007102 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-q48xh" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.011300 1971324 pod_ready.go:93] pod "kube-proxy-q48xh" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.011327 1971324 pod_ready.go:82] duration metric: took 4.217903ms for pod "kube-proxy-q48xh" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.011339 1971324 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.019267 1971324 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:02:39.019290 1971324 pod_ready.go:82] duration metric: took 7.94282ms for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:02:39.019299 1971324 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" ...
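Each of these pod_ready waits re-reads the pod from the API server and checks its Ready condition until it flips to True or the timeout expires. With client-go the equivalent check looks roughly like the sketch below; it is not minikube's pod_ready.go, and only the kubeconfig path, namespace, and pod name are taken from the log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20242-1920423/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-default-k8s-diff-port-727256", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for pod to become Ready")
}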
	I0120 14:02:37.857302 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:38.357923 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:38.857475 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.357375 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.857802 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:40.357852 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:40.857000 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:41.357100 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:41.857256 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:42.357310 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:39.778127 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:41.778931 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:42.185876 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:44.685996 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:41.026382 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:43.026822 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:45.526641 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:42.857156 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:43.357487 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:43.857399 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.357134 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.857807 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:45.358043 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:45.857787 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:46.357476 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:46.857480 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:47.357059 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:44.284374 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:46.778063 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:46.686210 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:49.185352 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:48.025036 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:50.027377 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:47.857389 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:48.357917 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:48.857908 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.357865 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.857103 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:50.357844 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:50.856981 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:51.357722 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:51.857389 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:52.357276 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:49.277771 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:51.280318 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:53.778876 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:51.685546 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:53.685814 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:52.526770 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.026492 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:52.857418 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:53.357813 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:53.857620 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:54.357209 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:54.857914 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.357510 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.857571 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:56.357067 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:56.857492 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:57.357062 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:02:55.783020 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:58.280672 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:55.686206 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:58.186818 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:57.026925 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:59.525553 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:02:57.857477 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:02:57.857614 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:02:57.905881 1971155 cri.go:89] found id: ""
	I0120 14:02:57.905912 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.905922 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:02:57.905929 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:02:57.905992 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:02:57.943622 1971155 cri.go:89] found id: ""
	I0120 14:02:57.943651 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.943661 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:02:57.943667 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:02:57.943723 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:02:57.988526 1971155 cri.go:89] found id: ""
	I0120 14:02:57.988562 1971155 logs.go:282] 0 containers: []
	W0120 14:02:57.988574 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:02:57.988583 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:02:57.988651 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:02:58.031485 1971155 cri.go:89] found id: ""
	I0120 14:02:58.031521 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.031534 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:02:58.031543 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:02:58.031610 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:02:58.068567 1971155 cri.go:89] found id: ""
	I0120 14:02:58.068598 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.068607 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:02:58.068613 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:02:58.068671 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:02:58.111132 1971155 cri.go:89] found id: ""
	I0120 14:02:58.111163 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.111172 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:02:58.111179 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:02:58.111249 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:02:58.148303 1971155 cri.go:89] found id: ""
	I0120 14:02:58.148347 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.148360 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:02:58.148369 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:02:58.148451 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:02:58.185950 1971155 cri.go:89] found id: ""
	I0120 14:02:58.185999 1971155 logs.go:282] 0 containers: []
	W0120 14:02:58.186012 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:02:58.186045 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:02:58.186067 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:02:58.240918 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:02:58.240967 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:02:58.257093 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:02:58.257146 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:02:58.414616 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:02:58.414647 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:02:58.414668 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:02:58.492488 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:02:58.492552 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:01.040468 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:01.055229 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:01.055334 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:01.096466 1971155 cri.go:89] found id: ""
	I0120 14:03:01.096504 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.096517 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:01.096527 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:01.096598 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:01.134935 1971155 cri.go:89] found id: ""
	I0120 14:03:01.134970 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.134981 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:01.134991 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:01.135067 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:01.173227 1971155 cri.go:89] found id: ""
	I0120 14:03:01.173260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.173270 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:01.173276 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:01.173330 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:01.214239 1971155 cri.go:89] found id: ""
	I0120 14:03:01.214284 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.214295 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:01.214305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:01.214371 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:01.256599 1971155 cri.go:89] found id: ""
	I0120 14:03:01.256637 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.256650 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:01.256659 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:01.256739 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:01.296996 1971155 cri.go:89] found id: ""
	I0120 14:03:01.297032 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.297061 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:01.297070 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:01.297138 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:01.332783 1971155 cri.go:89] found id: ""
	I0120 14:03:01.332823 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.332834 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:01.332843 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:01.332918 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:01.369365 1971155 cri.go:89] found id: ""
	I0120 14:03:01.369406 1971155 logs.go:282] 0 containers: []
	W0120 14:03:01.369421 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:01.369434 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:01.369451 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:01.414439 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:01.414480 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:01.471195 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:01.471246 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:01.486430 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:01.486462 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:01.574798 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:01.574828 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:01.574845 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:00.778133 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:02.778231 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:00.685031 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:03.185220 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:01.527499 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:04.025999 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:04.171235 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:04.188065 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:04.188156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:04.228357 1971155 cri.go:89] found id: ""
	I0120 14:03:04.228387 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.228400 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:04.228409 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:04.228467 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:04.267565 1971155 cri.go:89] found id: ""
	I0120 14:03:04.267610 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.267624 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:04.267635 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:04.267711 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:04.307392 1971155 cri.go:89] found id: ""
	I0120 14:03:04.307425 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.307434 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:04.307440 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:04.307508 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:04.349729 1971155 cri.go:89] found id: ""
	I0120 14:03:04.349767 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.349778 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:04.349786 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:04.349870 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:04.387475 1971155 cri.go:89] found id: ""
	I0120 14:03:04.387501 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.387509 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:04.387516 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:04.387572 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:04.427468 1971155 cri.go:89] found id: ""
	I0120 14:03:04.427509 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.427530 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:04.427539 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:04.427612 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:04.466639 1971155 cri.go:89] found id: ""
	I0120 14:03:04.466670 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.466679 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:04.466686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:04.466741 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:04.504757 1971155 cri.go:89] found id: ""
	I0120 14:03:04.504787 1971155 logs.go:282] 0 containers: []
	W0120 14:03:04.504795 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:04.504806 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:04.504818 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:04.557733 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:04.557779 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:04.573354 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:04.573387 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:04.650417 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:04.650446 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:04.650463 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:04.733072 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:04.733120 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:07.274982 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:07.290100 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:07.290193 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:07.332977 1971155 cri.go:89] found id: ""
	I0120 14:03:07.333017 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.333029 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:07.333038 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:07.333115 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:07.372892 1971155 cri.go:89] found id: ""
	I0120 14:03:07.372933 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.372945 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:07.372954 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:07.373026 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:07.425530 1971155 cri.go:89] found id: ""
	I0120 14:03:07.425577 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.425590 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:07.425600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:07.425662 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:04.778368 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:06.778647 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:05.684845 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:07.685532 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:06.026498 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:08.526091 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:10.526915 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:07.476155 1971155 cri.go:89] found id: ""
	I0120 14:03:07.476184 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.476193 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:07.476199 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:07.476254 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:07.521877 1971155 cri.go:89] found id: ""
	I0120 14:03:07.521914 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.521926 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:07.521939 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:07.522011 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:07.560355 1971155 cri.go:89] found id: ""
	I0120 14:03:07.560395 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.560409 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:07.560418 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:07.560487 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:07.600264 1971155 cri.go:89] found id: ""
	I0120 14:03:07.600300 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.600312 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:07.600320 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:07.600394 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:07.638852 1971155 cri.go:89] found id: ""
	I0120 14:03:07.638882 1971155 logs.go:282] 0 containers: []
	W0120 14:03:07.638891 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:07.638904 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:07.638921 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:07.697341 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:07.697388 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:07.712419 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:07.712453 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:07.790196 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:07.790219 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:07.790236 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:07.865638 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:07.865691 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:10.411816 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:10.425923 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:10.425995 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:10.469227 1971155 cri.go:89] found id: ""
	I0120 14:03:10.469260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.469271 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:10.469279 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:10.469335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:10.507955 1971155 cri.go:89] found id: ""
	I0120 14:03:10.507982 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.507991 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:10.507997 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:10.508064 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:10.543101 1971155 cri.go:89] found id: ""
	I0120 14:03:10.543127 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.543135 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:10.543141 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:10.543211 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:10.585664 1971155 cri.go:89] found id: ""
	I0120 14:03:10.585707 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.585722 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:10.585731 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:10.585798 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:10.623476 1971155 cri.go:89] found id: ""
	I0120 14:03:10.623509 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.623519 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:10.623526 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:10.623696 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:10.660175 1971155 cri.go:89] found id: ""
	I0120 14:03:10.660212 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.660236 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:10.660243 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:10.660328 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:10.701559 1971155 cri.go:89] found id: ""
	I0120 14:03:10.701587 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.701595 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:10.701601 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:10.701660 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:10.745904 1971155 cri.go:89] found id: ""
	I0120 14:03:10.745934 1971155 logs.go:282] 0 containers: []
	W0120 14:03:10.745946 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:10.745960 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:10.745977 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:10.797159 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:10.797195 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:10.811080 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:10.811114 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:10.892397 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:10.892453 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:10.892474 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:10.974483 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:10.974548 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:09.277769 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:11.279861 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.778783 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:10.188443 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:12.684802 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:14.685044 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.026831 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:15.028964 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:13.520017 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:13.534970 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:13.535057 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:13.572408 1971155 cri.go:89] found id: ""
	I0120 14:03:13.572447 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.572460 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:13.572469 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:13.572551 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:13.611551 1971155 cri.go:89] found id: ""
	I0120 14:03:13.611584 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.611594 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:13.611602 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:13.611679 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:13.648597 1971155 cri.go:89] found id: ""
	I0120 14:03:13.648643 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.648659 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:13.648669 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:13.648746 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:13.688240 1971155 cri.go:89] found id: ""
	I0120 14:03:13.688273 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.688284 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:13.688292 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:13.688359 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:13.726824 1971155 cri.go:89] found id: ""
	I0120 14:03:13.726858 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.726870 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:13.726879 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:13.726960 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:13.763355 1971155 cri.go:89] found id: ""
	I0120 14:03:13.763393 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.763406 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:13.763426 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:13.763520 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:13.805672 1971155 cri.go:89] found id: ""
	I0120 14:03:13.805709 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.805721 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:13.805729 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:13.805808 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:13.843604 1971155 cri.go:89] found id: ""
	I0120 14:03:13.843639 1971155 logs.go:282] 0 containers: []
	W0120 14:03:13.843647 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:13.843658 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:13.843677 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:13.900719 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:13.900769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:13.917734 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:13.917769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:13.989979 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:13.990004 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:13.990023 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:14.065519 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:14.065568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:16.608887 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:16.624966 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:16.625095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:16.663250 1971155 cri.go:89] found id: ""
	I0120 14:03:16.663286 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.663299 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:16.663309 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:16.663381 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:16.705075 1971155 cri.go:89] found id: ""
	I0120 14:03:16.705109 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.705121 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:16.705129 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:16.705203 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:16.743136 1971155 cri.go:89] found id: ""
	I0120 14:03:16.743172 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.743183 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:16.743196 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:16.743259 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:16.781721 1971155 cri.go:89] found id: ""
	I0120 14:03:16.781749 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.781759 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:16.781768 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:16.781838 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:16.819156 1971155 cri.go:89] found id: ""
	I0120 14:03:16.819186 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.819195 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:16.819201 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:16.819267 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:16.857239 1971155 cri.go:89] found id: ""
	I0120 14:03:16.857271 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.857282 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:16.857291 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:16.857366 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:16.896447 1971155 cri.go:89] found id: ""
	I0120 14:03:16.896484 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.896494 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:16.896500 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:16.896573 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:16.933838 1971155 cri.go:89] found id: ""
	I0120 14:03:16.933868 1971155 logs.go:282] 0 containers: []
	W0120 14:03:16.933884 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:16.933895 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:16.933912 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:16.947603 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:16.947641 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:17.030769 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:17.030797 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:17.030817 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:17.113685 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:17.113733 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:17.156727 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:17.156762 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:16.279194 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:18.279451 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:16.686668 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.185833 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:17.525194 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.526034 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:19.718569 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:19.732512 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:19.732591 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:19.767932 1971155 cri.go:89] found id: ""
	I0120 14:03:19.767967 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.767978 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:19.767986 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:19.768060 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:19.803810 1971155 cri.go:89] found id: ""
	I0120 14:03:19.803849 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.803862 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:19.803870 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:19.803939 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:19.843834 1971155 cri.go:89] found id: ""
	I0120 14:03:19.843862 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.843873 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:19.843886 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:19.843958 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:19.881732 1971155 cri.go:89] found id: ""
	I0120 14:03:19.881763 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.881774 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:19.881781 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:19.881848 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:19.924381 1971155 cri.go:89] found id: ""
	I0120 14:03:19.924417 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.924428 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:19.924437 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:19.924502 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:19.970958 1971155 cri.go:89] found id: ""
	I0120 14:03:19.970987 1971155 logs.go:282] 0 containers: []
	W0120 14:03:19.970996 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:19.971004 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:19.971065 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:20.012745 1971155 cri.go:89] found id: ""
	I0120 14:03:20.012781 1971155 logs.go:282] 0 containers: []
	W0120 14:03:20.012792 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:20.012800 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:20.012874 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:20.051390 1971155 cri.go:89] found id: ""
	I0120 14:03:20.051440 1971155 logs.go:282] 0 containers: []
	W0120 14:03:20.051458 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:20.051472 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:20.051496 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:20.110400 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:20.110442 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:20.127460 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:20.127494 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:20.204395 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:20.204421 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:20.204438 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:20.285467 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:20.285512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:20.281009 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:22.778157 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:21.685011 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:24.185145 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:21.527945 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:24.028130 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:22.839418 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:22.853700 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:22.853779 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:22.889955 1971155 cri.go:89] found id: ""
	I0120 14:03:22.889984 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.889992 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:22.889998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:22.890051 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:22.927006 1971155 cri.go:89] found id: ""
	I0120 14:03:22.927035 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.927044 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:22.927050 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:22.927114 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:22.964259 1971155 cri.go:89] found id: ""
	I0120 14:03:22.964295 1971155 logs.go:282] 0 containers: []
	W0120 14:03:22.964321 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:22.964330 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:22.964394 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:23.002226 1971155 cri.go:89] found id: ""
	I0120 14:03:23.002259 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.002268 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:23.002274 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:23.002331 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:23.039583 1971155 cri.go:89] found id: ""
	I0120 14:03:23.039620 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.039633 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:23.039641 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:23.039722 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:23.078733 1971155 cri.go:89] found id: ""
	I0120 14:03:23.078761 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.078770 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:23.078803 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:23.078878 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:23.114333 1971155 cri.go:89] found id: ""
	I0120 14:03:23.114390 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.114403 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:23.114411 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:23.114485 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:23.150761 1971155 cri.go:89] found id: ""
	I0120 14:03:23.150797 1971155 logs.go:282] 0 containers: []
	W0120 14:03:23.150809 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:23.150824 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:23.150839 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:23.213320 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:23.213384 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:23.228681 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:23.228717 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:23.301816 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:23.301842 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:23.301858 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:23.387061 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:23.387117 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:25.931823 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:25.945038 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:25.945134 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:25.981262 1971155 cri.go:89] found id: ""
	I0120 14:03:25.981315 1971155 logs.go:282] 0 containers: []
	W0120 14:03:25.981330 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:25.981340 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:25.981420 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:26.018945 1971155 cri.go:89] found id: ""
	I0120 14:03:26.018980 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.018993 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:26.019001 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:26.019080 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:26.060446 1971155 cri.go:89] found id: ""
	I0120 14:03:26.060477 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.060487 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:26.060496 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:26.060563 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:26.097720 1971155 cri.go:89] found id: ""
	I0120 14:03:26.097761 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.097782 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:26.097792 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:26.097861 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:26.133561 1971155 cri.go:89] found id: ""
	I0120 14:03:26.133593 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.133605 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:26.133614 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:26.133701 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:26.175091 1971155 cri.go:89] found id: ""
	I0120 14:03:26.175124 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.175136 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:26.175144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:26.175206 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:26.214747 1971155 cri.go:89] found id: ""
	I0120 14:03:26.214779 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.214788 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:26.214794 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:26.214864 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:26.264211 1971155 cri.go:89] found id: ""
	I0120 14:03:26.264244 1971155 logs.go:282] 0 containers: []
	W0120 14:03:26.264255 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:26.264269 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:26.264291 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:26.282025 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:26.282062 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:26.359793 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:26.359820 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:26.359842 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:26.447177 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:26.447224 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:26.487488 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:26.487523 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:25.279187 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:27.282700 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:26.186599 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:28.684816 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:26.527177 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:29.026067 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:29.039824 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:29.054535 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:29.054619 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:29.096202 1971155 cri.go:89] found id: ""
	I0120 14:03:29.096233 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.096245 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:29.096254 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:29.096316 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:29.139442 1971155 cri.go:89] found id: ""
	I0120 14:03:29.139475 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.139485 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:29.139492 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:29.139565 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:29.181278 1971155 cri.go:89] found id: ""
	I0120 14:03:29.181320 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.181334 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:29.181343 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:29.181424 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:29.222018 1971155 cri.go:89] found id: ""
	I0120 14:03:29.222049 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.222058 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:29.222072 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:29.222129 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:29.263028 1971155 cri.go:89] found id: ""
	I0120 14:03:29.263071 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.263083 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:29.263092 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:29.263167 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:29.307933 1971155 cri.go:89] found id: ""
	I0120 14:03:29.307965 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.307973 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:29.307980 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:29.308040 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:29.344204 1971155 cri.go:89] found id: ""
	I0120 14:03:29.344237 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.344250 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:29.344258 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:29.344327 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:29.381577 1971155 cri.go:89] found id: ""
	I0120 14:03:29.381604 1971155 logs.go:282] 0 containers: []
	W0120 14:03:29.381613 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:29.381623 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:29.381636 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:29.396553 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:29.396592 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:29.476381 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:29.476406 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:29.476420 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:29.552542 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:29.552586 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:29.597585 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:29.597619 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:32.150749 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:32.166160 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:32.166240 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:32.209621 1971155 cri.go:89] found id: ""
	I0120 14:03:32.209657 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.209671 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:32.209682 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:32.209764 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:32.250347 1971155 cri.go:89] found id: ""
	I0120 14:03:32.250386 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.250397 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:32.250405 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:32.250477 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:32.291555 1971155 cri.go:89] found id: ""
	I0120 14:03:32.291587 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.291599 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:32.291607 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:32.291677 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:32.329975 1971155 cri.go:89] found id: ""
	I0120 14:03:32.330015 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.330023 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:32.330030 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:32.330107 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:32.371131 1971155 cri.go:89] found id: ""
	I0120 14:03:32.371170 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.371190 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:32.371199 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:32.371273 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:32.409613 1971155 cri.go:89] found id: ""
	I0120 14:03:32.409653 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.409665 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:32.409672 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:32.409732 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:29.778719 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:32.279358 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:30.686778 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:33.184968 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:35.185398 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:31.026580 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:33.028333 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:35.527445 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:32.448898 1971155 cri.go:89] found id: ""
	I0120 14:03:32.448932 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.448944 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:32.448953 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:32.449029 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:32.486258 1971155 cri.go:89] found id: ""
	I0120 14:03:32.486295 1971155 logs.go:282] 0 containers: []
	W0120 14:03:32.486308 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:32.486323 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:32.486340 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:32.538196 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:32.538238 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:32.553140 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:32.553173 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:32.640124 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:32.640147 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:32.640161 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:32.725556 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:32.725615 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:35.276962 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:35.292662 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:35.292754 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:35.332066 1971155 cri.go:89] found id: ""
	I0120 14:03:35.332099 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.332111 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:35.332119 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:35.332188 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:35.369977 1971155 cri.go:89] found id: ""
	I0120 14:03:35.370010 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.370024 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:35.370030 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:35.370099 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:35.412630 1971155 cri.go:89] found id: ""
	I0120 14:03:35.412663 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.412672 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:35.412680 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:35.412746 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:35.450785 1971155 cri.go:89] found id: ""
	I0120 14:03:35.450819 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.450830 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:35.450838 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:35.450908 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:35.496877 1971155 cri.go:89] found id: ""
	I0120 14:03:35.496930 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.496943 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:35.496950 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:35.497021 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:35.538626 1971155 cri.go:89] found id: ""
	I0120 14:03:35.538662 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.538675 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:35.538684 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:35.538768 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:35.579144 1971155 cri.go:89] found id: ""
	I0120 14:03:35.579181 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.579195 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:35.579204 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:35.579283 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:35.623935 1971155 cri.go:89] found id: ""
	I0120 14:03:35.623985 1971155 logs.go:282] 0 containers: []
	W0120 14:03:35.623997 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:35.624038 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:35.624074 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:35.664682 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:35.664716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:35.722441 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:35.722505 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:35.752215 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:35.752246 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:35.843666 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:35.843692 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:35.843706 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:34.778378 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:36.778557 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:37.685015 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:39.689385 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:38.026699 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:40.526689 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:38.427318 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:38.441690 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:38.441767 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:38.481605 1971155 cri.go:89] found id: ""
	I0120 14:03:38.481636 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.481648 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:38.481655 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:38.481726 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:38.518378 1971155 cri.go:89] found id: ""
	I0120 14:03:38.518415 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.518427 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:38.518436 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:38.518512 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:38.561625 1971155 cri.go:89] found id: ""
	I0120 14:03:38.561674 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.561687 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:38.561696 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:38.561764 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:38.603557 1971155 cri.go:89] found id: ""
	I0120 14:03:38.603585 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.603593 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:38.603600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:38.603671 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:38.644242 1971155 cri.go:89] found id: ""
	I0120 14:03:38.644276 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.644289 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:38.644298 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:38.644364 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:38.686124 1971155 cri.go:89] found id: ""
	I0120 14:03:38.686154 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.686166 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:38.686175 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:38.686257 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:38.731861 1971155 cri.go:89] found id: ""
	I0120 14:03:38.731896 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.731906 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:38.731915 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:38.732002 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:38.773494 1971155 cri.go:89] found id: ""
	I0120 14:03:38.773522 1971155 logs.go:282] 0 containers: []
	W0120 14:03:38.773533 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:38.773579 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:38.773602 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:38.827125 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:38.827168 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:38.841903 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:38.841939 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:38.928392 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:38.928423 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:38.928440 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:39.008227 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:39.008270 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:41.554775 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:41.568912 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:41.568983 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:41.616455 1971155 cri.go:89] found id: ""
	I0120 14:03:41.616483 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.616491 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:41.616505 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:41.616584 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:41.654958 1971155 cri.go:89] found id: ""
	I0120 14:03:41.654995 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.655007 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:41.655014 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:41.655091 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:41.695758 1971155 cri.go:89] found id: ""
	I0120 14:03:41.695800 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.695814 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:41.695824 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:41.695901 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:41.733782 1971155 cri.go:89] found id: ""
	I0120 14:03:41.733815 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.733826 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:41.733834 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:41.733906 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:41.771097 1971155 cri.go:89] found id: ""
	I0120 14:03:41.771129 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.771141 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:41.771150 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:41.771266 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:41.808590 1971155 cri.go:89] found id: ""
	I0120 14:03:41.808629 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.808643 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:41.808652 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:41.808733 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:41.848943 1971155 cri.go:89] found id: ""
	I0120 14:03:41.848971 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.848982 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:41.848990 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:41.849057 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:41.886267 1971155 cri.go:89] found id: ""
	I0120 14:03:41.886302 1971155 logs.go:282] 0 containers: []
	W0120 14:03:41.886315 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:41.886328 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:41.886354 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:41.903471 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:41.903519 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:41.980320 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:41.980342 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:41.980358 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:42.060823 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:42.060868 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:42.102476 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:42.102511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:39.278753 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:41.778436 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:42.189707 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:44.686641 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:43.026630 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:45.526315 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:44.677081 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:44.691997 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:44.692094 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:44.732561 1971155 cri.go:89] found id: ""
	I0120 14:03:44.732599 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.732611 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:44.732620 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:44.732701 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:44.774215 1971155 cri.go:89] found id: ""
	I0120 14:03:44.774250 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.774259 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:44.774266 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:44.774330 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:44.815997 1971155 cri.go:89] found id: ""
	I0120 14:03:44.816031 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.816040 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:44.816046 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:44.816109 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:44.853946 1971155 cri.go:89] found id: ""
	I0120 14:03:44.853984 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.853996 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:44.854004 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:44.854070 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:44.896969 1971155 cri.go:89] found id: ""
	I0120 14:03:44.897006 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.897018 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:44.897028 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:44.897120 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:44.942458 1971155 cri.go:89] found id: ""
	I0120 14:03:44.942496 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.942508 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:44.942518 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:44.942648 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:44.984028 1971155 cri.go:89] found id: ""
	I0120 14:03:44.984059 1971155 logs.go:282] 0 containers: []
	W0120 14:03:44.984084 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:44.984094 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:44.984173 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:45.026096 1971155 cri.go:89] found id: ""
	I0120 14:03:45.026130 1971155 logs.go:282] 0 containers: []
	W0120 14:03:45.026141 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:45.026153 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:45.026169 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:45.110471 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:45.110527 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:45.154855 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:45.154892 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:45.214465 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:45.214511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:45.232020 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:45.232054 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:45.312932 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:44.278244 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:46.777269 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:48.777901 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.184802 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:49.184874 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.526520 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:50.026151 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:47.813923 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:47.828326 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:47.828422 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:47.865843 1971155 cri.go:89] found id: ""
	I0120 14:03:47.865875 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.865884 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:47.865891 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:47.865952 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:47.913554 1971155 cri.go:89] found id: ""
	I0120 14:03:47.913582 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.913590 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:47.913597 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:47.913655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:47.970084 1971155 cri.go:89] found id: ""
	I0120 14:03:47.970115 1971155 logs.go:282] 0 containers: []
	W0120 14:03:47.970135 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:47.970144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:47.970205 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:48.016631 1971155 cri.go:89] found id: ""
	I0120 14:03:48.016737 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.016750 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:48.016758 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:48.016833 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:48.073208 1971155 cri.go:89] found id: ""
	I0120 14:03:48.073253 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.073266 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:48.073276 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:48.073387 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:48.111638 1971155 cri.go:89] found id: ""
	I0120 14:03:48.111680 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.111692 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:48.111701 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:48.111783 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:48.155605 1971155 cri.go:89] found id: ""
	I0120 14:03:48.155640 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.155653 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:48.155661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:48.155732 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:48.204162 1971155 cri.go:89] found id: ""
	I0120 14:03:48.204204 1971155 logs.go:282] 0 containers: []
	W0120 14:03:48.204219 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:48.204234 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:48.204257 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:48.259987 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:48.260042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:48.275801 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:48.275832 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:48.361115 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:48.361150 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:48.361170 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:48.443876 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:48.443921 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:50.992981 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:51.009283 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:51.009370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:51.052492 1971155 cri.go:89] found id: ""
	I0120 14:03:51.052523 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.052533 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:51.052540 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:51.052616 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:51.096548 1971155 cri.go:89] found id: ""
	I0120 14:03:51.096575 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.096583 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:51.096589 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:51.096655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:51.138339 1971155 cri.go:89] found id: ""
	I0120 14:03:51.138369 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.138378 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:51.138385 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:51.138456 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:51.181155 1971155 cri.go:89] found id: ""
	I0120 14:03:51.181188 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.181198 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:51.181205 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:51.181261 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:51.223988 1971155 cri.go:89] found id: ""
	I0120 14:03:51.224026 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.224038 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:51.224045 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:51.224106 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:51.261863 1971155 cri.go:89] found id: ""
	I0120 14:03:51.261896 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.261905 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:51.261911 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:51.261976 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:51.303816 1971155 cri.go:89] found id: ""
	I0120 14:03:51.303850 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.303862 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:51.303870 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:51.303946 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:51.340897 1971155 cri.go:89] found id: ""
	I0120 14:03:51.340935 1971155 logs.go:282] 0 containers: []
	W0120 14:03:51.340946 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:51.340960 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:51.340983 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:51.393462 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:51.393512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:51.409330 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:51.409361 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:51.483485 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:51.483510 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:51.483525 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:51.560879 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:51.560920 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:50.779106 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:53.278544 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:51.185101 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:53.186284 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:55.186474 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:52.026377 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:54.526778 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:54.106090 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:54.121203 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:54.121282 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:54.171790 1971155 cri.go:89] found id: ""
	I0120 14:03:54.171818 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.171826 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:54.171833 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:54.171888 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:54.215021 1971155 cri.go:89] found id: ""
	I0120 14:03:54.215058 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.215069 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:54.215076 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:54.215138 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:54.252537 1971155 cri.go:89] found id: ""
	I0120 14:03:54.252565 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.252573 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:54.252580 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:54.252635 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:54.291366 1971155 cri.go:89] found id: ""
	I0120 14:03:54.291396 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.291405 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:54.291411 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:54.291482 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:54.328162 1971155 cri.go:89] found id: ""
	I0120 14:03:54.328206 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.328219 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:54.328227 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:54.328310 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:54.366862 1971155 cri.go:89] found id: ""
	I0120 14:03:54.366898 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.366908 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:54.366920 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:54.366996 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:54.404501 1971155 cri.go:89] found id: ""
	I0120 14:03:54.404534 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.404543 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:54.404549 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:54.404609 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:54.443468 1971155 cri.go:89] found id: ""
	I0120 14:03:54.443504 1971155 logs.go:282] 0 containers: []
	W0120 14:03:54.443518 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:54.443531 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:54.443554 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:54.458948 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:54.458993 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:54.542353 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:03:54.542379 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:54.542400 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:54.629014 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:54.629060 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:54.673822 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:54.673857 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:57.228212 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:03:57.242552 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:03:57.242667 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:03:57.282187 1971155 cri.go:89] found id: ""
	I0120 14:03:57.282215 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.282225 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:03:57.282232 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:03:57.282306 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:03:57.319233 1971155 cri.go:89] found id: ""
	I0120 14:03:57.319260 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.319268 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:03:57.319279 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:03:57.319340 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:03:57.356706 1971155 cri.go:89] found id: ""
	I0120 14:03:57.356730 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.356738 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:03:57.356744 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:03:57.356805 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:03:57.396553 1971155 cri.go:89] found id: ""
	I0120 14:03:57.396583 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.396594 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:03:57.396600 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:03:57.396657 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:03:55.783799 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:58.278376 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.186658 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:59.686959 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.027014 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:59.525725 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:03:57.434802 1971155 cri.go:89] found id: ""
	I0120 14:03:57.434835 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.434847 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:03:57.434855 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:03:57.434927 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:03:57.471668 1971155 cri.go:89] found id: ""
	I0120 14:03:57.471699 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.471710 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:03:57.471719 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:03:57.471789 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:03:57.512283 1971155 cri.go:89] found id: ""
	I0120 14:03:57.512318 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.512329 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:03:57.512337 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:03:57.512409 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:03:57.549948 1971155 cri.go:89] found id: ""
	I0120 14:03:57.549977 1971155 logs.go:282] 0 containers: []
	W0120 14:03:57.549986 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:03:57.549996 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:03:57.550010 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:03:57.639160 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:03:57.639213 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:03:57.685920 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:03:57.685954 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:03:57.743891 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:03:57.743935 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:03:57.760181 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:03:57.760223 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:03:57.840777 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:00.342573 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:00.360314 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:00.360397 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:00.407962 1971155 cri.go:89] found id: ""
	I0120 14:04:00.407997 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.408010 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:00.408020 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:00.408086 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:00.450962 1971155 cri.go:89] found id: ""
	I0120 14:04:00.451040 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.451053 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:00.451062 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:00.451129 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:00.487180 1971155 cri.go:89] found id: ""
	I0120 14:04:00.487216 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.487227 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:00.487239 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:00.487311 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:00.530835 1971155 cri.go:89] found id: ""
	I0120 14:04:00.530864 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.530873 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:00.530880 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:00.530948 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:00.570212 1971155 cri.go:89] found id: ""
	I0120 14:04:00.570245 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.570257 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:00.570265 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:00.570335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:00.611685 1971155 cri.go:89] found id: ""
	I0120 14:04:00.611716 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.611725 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:00.611731 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:00.611785 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:00.649370 1971155 cri.go:89] found id: ""
	I0120 14:04:00.649410 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.649423 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:00.649432 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:00.649498 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:00.685853 1971155 cri.go:89] found id: ""
	I0120 14:04:00.685889 1971155 logs.go:282] 0 containers: []
	W0120 14:04:00.685901 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:00.685915 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:00.685930 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:00.737015 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:00.737051 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:00.751682 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:00.751716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:00.830222 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:00.830247 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:00.830262 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:00.918955 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:00.919003 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:00.279152 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:02.778569 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:02.185020 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:04.185796 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:01.526915 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:03.529074 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:03.461705 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:03.478063 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:03.478144 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:03.525289 1971155 cri.go:89] found id: ""
	I0120 14:04:03.525326 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.525339 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:03.525349 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:03.525427 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:03.565302 1971155 cri.go:89] found id: ""
	I0120 14:04:03.565339 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.565351 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:03.565360 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:03.565441 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:03.607021 1971155 cri.go:89] found id: ""
	I0120 14:04:03.607048 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.607056 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:03.607063 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:03.607122 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:03.650398 1971155 cri.go:89] found id: ""
	I0120 14:04:03.650425 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.650433 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:03.650445 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:03.650513 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:03.689498 1971155 cri.go:89] found id: ""
	I0120 14:04:03.689531 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.689539 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:03.689545 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:03.689607 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:03.726928 1971155 cri.go:89] found id: ""
	I0120 14:04:03.726965 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.726978 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:03.726987 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:03.727054 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:03.764493 1971155 cri.go:89] found id: ""
	I0120 14:04:03.764532 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.764544 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:03.764552 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:03.764622 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:03.803514 1971155 cri.go:89] found id: ""
	I0120 14:04:03.803550 1971155 logs.go:282] 0 containers: []
	W0120 14:04:03.803562 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:03.803575 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:03.803595 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:03.847009 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:03.847045 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:03.900078 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:03.900124 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:03.916146 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:03.916179 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:03.988068 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:03.988102 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:03.988121 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:06.568829 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:06.583335 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:06.583422 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:06.628247 1971155 cri.go:89] found id: ""
	I0120 14:04:06.628283 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.628296 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:06.628305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:06.628365 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:06.673764 1971155 cri.go:89] found id: ""
	I0120 14:04:06.673792 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.673804 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:06.673820 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:06.673892 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:06.714328 1971155 cri.go:89] found id: ""
	I0120 14:04:06.714361 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.714373 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:06.714381 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:06.714458 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:06.750935 1971155 cri.go:89] found id: ""
	I0120 14:04:06.750975 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.750987 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:06.750996 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:06.751061 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:06.788944 1971155 cri.go:89] found id: ""
	I0120 14:04:06.788975 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.788982 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:06.788988 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:06.789056 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:06.826176 1971155 cri.go:89] found id: ""
	I0120 14:04:06.826216 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.826228 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:06.826245 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:06.826322 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:06.864607 1971155 cri.go:89] found id: ""
	I0120 14:04:06.864640 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.864649 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:06.864656 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:06.864741 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:06.901814 1971155 cri.go:89] found id: ""
	I0120 14:04:06.901889 1971155 logs.go:282] 0 containers: []
	W0120 14:04:06.901909 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:06.901922 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:06.901944 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:06.953391 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:06.953439 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:06.967876 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:06.967914 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:07.055449 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:07.055486 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:07.055511 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:07.138279 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:07.138328 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:04.778659 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.780874 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.188401 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:08.685683 1969949 pod_ready.go:103] pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:06.026194 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:08.525961 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:10.527780 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:09.684182 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:09.699353 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:09.699432 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:09.738834 1971155 cri.go:89] found id: ""
	I0120 14:04:09.738864 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.738875 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:09.738883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:09.738963 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:09.774822 1971155 cri.go:89] found id: ""
	I0120 14:04:09.774852 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.774864 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:09.774872 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:09.774942 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:09.813132 1971155 cri.go:89] found id: ""
	I0120 14:04:09.813167 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.813179 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:09.813187 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:09.813258 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:09.850809 1971155 cri.go:89] found id: ""
	I0120 14:04:09.850844 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.850855 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:09.850864 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:09.850947 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:09.889768 1971155 cri.go:89] found id: ""
	I0120 14:04:09.889802 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.889813 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:09.889821 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:09.889900 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:09.932037 1971155 cri.go:89] found id: ""
	I0120 14:04:09.932073 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.932081 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:09.932087 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:09.932150 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:09.970153 1971155 cri.go:89] found id: ""
	I0120 14:04:09.970197 1971155 logs.go:282] 0 containers: []
	W0120 14:04:09.970210 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:09.970218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:09.970287 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:10.009506 1971155 cri.go:89] found id: ""
	I0120 14:04:10.009535 1971155 logs.go:282] 0 containers: []
	W0120 14:04:10.009544 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:10.009555 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:10.009568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:10.097837 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:10.097896 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:10.140488 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:10.140534 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:10.195531 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:10.195575 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:10.210277 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:10.210322 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:10.296146 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:09.279024 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:11.279883 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:13.776738 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:11.178584 1969949 pod_ready.go:82] duration metric: took 4m0.000311545s for pod "metrics-server-f79f97bbb-bp4mx" in "kube-system" namespace to be "Ready" ...
	E0120 14:04:11.178646 1969949 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0120 14:04:11.178676 1969949 pod_ready.go:39] duration metric: took 4m14.547669609s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:11.178719 1969949 kubeadm.go:597] duration metric: took 4m22.42355041s to restartPrimaryControlPlane
	W0120 14:04:11.178845 1969949 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:04:11.178885 1969949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:04:13.027079 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:15.027945 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:12.796944 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:12.810984 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:12.811085 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:12.849374 1971155 cri.go:89] found id: ""
	I0120 14:04:12.849413 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.849426 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:12.849435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:12.849509 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:12.885922 1971155 cri.go:89] found id: ""
	I0120 14:04:12.885951 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.885960 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:12.885967 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:12.886039 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:12.922978 1971155 cri.go:89] found id: ""
	I0120 14:04:12.923019 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.923031 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:12.923040 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:12.923108 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:12.960519 1971155 cri.go:89] found id: ""
	I0120 14:04:12.960563 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.960572 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:12.960578 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:12.960688 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:12.997662 1971155 cri.go:89] found id: ""
	I0120 14:04:12.997702 1971155 logs.go:282] 0 containers: []
	W0120 14:04:12.997715 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:12.997724 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:12.997803 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:13.035613 1971155 cri.go:89] found id: ""
	I0120 14:04:13.035640 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.035651 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:13.035660 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:13.035736 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:13.073354 1971155 cri.go:89] found id: ""
	I0120 14:04:13.073389 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.073401 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:13.073410 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:13.073480 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:13.113735 1971155 cri.go:89] found id: ""
	I0120 14:04:13.113771 1971155 logs.go:282] 0 containers: []
	W0120 14:04:13.113780 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:13.113791 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:13.113804 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:13.170858 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:13.170906 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:13.186341 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:13.186375 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:13.260514 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:13.260540 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:13.260557 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:13.347360 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:13.347411 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:15.891859 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:15.907144 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:15.907238 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:15.943638 1971155 cri.go:89] found id: ""
	I0120 14:04:15.943675 1971155 logs.go:282] 0 containers: []
	W0120 14:04:15.943686 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:15.943693 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:15.943753 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:15.981820 1971155 cri.go:89] found id: ""
	I0120 14:04:15.981868 1971155 logs.go:282] 0 containers: []
	W0120 14:04:15.981882 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:15.981891 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:15.981971 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:16.019987 1971155 cri.go:89] found id: ""
	I0120 14:04:16.020058 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.020071 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:16.020080 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:16.020156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:16.059245 1971155 cri.go:89] found id: ""
	I0120 14:04:16.059278 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.059288 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:16.059295 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:16.059370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:16.095081 1971155 cri.go:89] found id: ""
	I0120 14:04:16.095123 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.095136 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:16.095146 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:16.095224 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:16.134357 1971155 cri.go:89] found id: ""
	I0120 14:04:16.134403 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.134416 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:16.134425 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:16.134497 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:16.177729 1971155 cri.go:89] found id: ""
	I0120 14:04:16.177762 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.177774 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:16.177783 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:16.177864 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:16.214324 1971155 cri.go:89] found id: ""
	I0120 14:04:16.214360 1971155 logs.go:282] 0 containers: []
	W0120 14:04:16.214371 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:16.214392 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:16.214412 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:16.270670 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:16.270716 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:16.326541 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:16.326587 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:16.343430 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:16.343469 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:16.429522 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:16.429554 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:16.429572 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:15.778836 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:18.279084 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:17.526959 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:20.027030 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:19.008712 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:19.024398 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:19.024489 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:19.064138 1971155 cri.go:89] found id: ""
	I0120 14:04:19.064169 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.064178 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:19.064184 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:19.064253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:19.102639 1971155 cri.go:89] found id: ""
	I0120 14:04:19.102672 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.102681 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:19.102687 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:19.102755 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:19.141058 1971155 cri.go:89] found id: ""
	I0120 14:04:19.141105 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.141119 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:19.141130 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:19.141200 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:19.179972 1971155 cri.go:89] found id: ""
	I0120 14:04:19.180004 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.180013 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:19.180025 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:19.180095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:19.219516 1971155 cri.go:89] found id: ""
	I0120 14:04:19.219549 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.219562 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:19.219571 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:19.219634 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:19.262728 1971155 cri.go:89] found id: ""
	I0120 14:04:19.262764 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.262776 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:19.262785 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:19.262860 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:19.299472 1971155 cri.go:89] found id: ""
	I0120 14:04:19.299527 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.299539 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:19.299548 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:19.299634 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:19.341054 1971155 cri.go:89] found id: ""
	I0120 14:04:19.341095 1971155 logs.go:282] 0 containers: []
	W0120 14:04:19.341107 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:19.341119 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:19.341133 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:19.426002 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:19.426058 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:19.469471 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:19.469504 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:19.524625 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:19.524661 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:19.539365 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:19.539398 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:19.620545 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:22.122261 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:22.137515 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:22.137590 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:22.177366 1971155 cri.go:89] found id: ""
	I0120 14:04:22.177405 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.177417 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:22.177426 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:22.177494 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:22.215596 1971155 cri.go:89] found id: ""
	I0120 14:04:22.215641 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.215653 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:22.215662 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:22.215734 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:22.252783 1971155 cri.go:89] found id: ""
	I0120 14:04:22.252820 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.252832 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:22.252841 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:22.252917 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:22.295160 1971155 cri.go:89] found id: ""
	I0120 14:04:22.295199 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.295213 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:22.295221 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:22.295284 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:22.334614 1971155 cri.go:89] found id: ""
	I0120 14:04:22.334651 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.334662 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:22.334672 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:22.334754 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:22.372516 1971155 cri.go:89] found id: ""
	I0120 14:04:22.372545 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.372554 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:22.372562 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:22.372633 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:22.412784 1971155 cri.go:89] found id: ""
	I0120 14:04:22.412819 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.412827 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:22.412833 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:22.412895 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:20.778968 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.779314 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.526513 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:24.527843 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:22.449865 1971155 cri.go:89] found id: ""
	I0120 14:04:22.449900 1971155 logs.go:282] 0 containers: []
	W0120 14:04:22.449909 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:22.449920 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:22.449934 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:22.464473 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:22.464514 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:22.546804 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:22.546835 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:22.546858 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:22.624614 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:22.624664 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:22.679053 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:22.679085 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:25.238495 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:25.254177 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:25.254253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:25.299255 1971155 cri.go:89] found id: ""
	I0120 14:04:25.299291 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.299300 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:25.299308 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:25.299373 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:25.337454 1971155 cri.go:89] found id: ""
	I0120 14:04:25.337481 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.337490 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:25.337496 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:25.337556 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:25.375094 1971155 cri.go:89] found id: ""
	I0120 14:04:25.375129 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.375139 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:25.375148 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:25.375224 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:25.413177 1971155 cri.go:89] found id: ""
	I0120 14:04:25.413206 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.413217 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:25.413223 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:25.413288 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:25.448775 1971155 cri.go:89] found id: ""
	I0120 14:04:25.448812 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.448821 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:25.448827 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:25.448883 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:25.484560 1971155 cri.go:89] found id: ""
	I0120 14:04:25.484591 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.484600 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:25.484607 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:25.484660 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:25.522990 1971155 cri.go:89] found id: ""
	I0120 14:04:25.523029 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.523041 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:25.523049 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:25.523128 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:25.560861 1971155 cri.go:89] found id: ""
	I0120 14:04:25.560899 1971155 logs.go:282] 0 containers: []
	W0120 14:04:25.560910 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:25.560925 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:25.560941 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:25.614479 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:25.614528 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:25.630030 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:25.630070 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:25.704721 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:25.704758 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:25.704781 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:25.782265 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:25.782309 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:25.279994 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:27.778659 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:27.027167 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:29.525787 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:28.332905 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:28.351517 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:28.351594 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:28.394070 1971155 cri.go:89] found id: ""
	I0120 14:04:28.394110 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.394122 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:28.394130 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:28.394204 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:28.445893 1971155 cri.go:89] found id: ""
	I0120 14:04:28.445924 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.445934 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:28.445940 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:28.446034 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:28.511766 1971155 cri.go:89] found id: ""
	I0120 14:04:28.511801 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.511811 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:28.511820 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:28.511891 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:28.558333 1971155 cri.go:89] found id: ""
	I0120 14:04:28.558369 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.558382 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:28.558391 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:28.558469 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:28.608161 1971155 cri.go:89] found id: ""
	I0120 14:04:28.608196 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.608207 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:28.608215 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:28.608288 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:28.645545 1971155 cri.go:89] found id: ""
	I0120 14:04:28.645576 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.645585 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:28.645592 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:28.645651 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:28.682795 1971155 cri.go:89] found id: ""
	I0120 14:04:28.682833 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.682845 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:28.682854 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:28.682943 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:28.719887 1971155 cri.go:89] found id: ""
	I0120 14:04:28.719918 1971155 logs.go:282] 0 containers: []
	W0120 14:04:28.719928 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:28.719941 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:28.719965 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:28.776644 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:28.776683 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:28.791778 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:28.791812 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:28.870972 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:28.871001 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:28.871027 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:28.950524 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:28.950568 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:31.494786 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:31.508961 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:31.509041 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:31.550239 1971155 cri.go:89] found id: ""
	I0120 14:04:31.550275 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.550287 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:31.550295 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:31.550374 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:31.589113 1971155 cri.go:89] found id: ""
	I0120 14:04:31.589149 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.589161 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:31.589169 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:31.589271 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:31.626500 1971155 cri.go:89] found id: ""
	I0120 14:04:31.626537 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.626547 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:31.626556 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:31.626655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:31.661941 1971155 cri.go:89] found id: ""
	I0120 14:04:31.661972 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.661980 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:31.661987 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:31.662079 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:31.699223 1971155 cri.go:89] found id: ""
	I0120 14:04:31.699269 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.699283 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:31.699291 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:31.699359 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:31.736559 1971155 cri.go:89] found id: ""
	I0120 14:04:31.736589 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.736601 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:31.736608 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:31.736680 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:31.774254 1971155 cri.go:89] found id: ""
	I0120 14:04:31.774296 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.774304 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:31.774314 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:31.774460 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:31.813913 1971155 cri.go:89] found id: ""
	I0120 14:04:31.813952 1971155 logs.go:282] 0 containers: []
	W0120 14:04:31.813964 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:31.813977 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:31.813991 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:31.864887 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:31.864936 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:31.880250 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:31.880286 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:31.955208 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:31.955232 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:31.955247 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:32.039812 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:32.039875 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:29.780496 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:32.277638 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:31.526304 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:33.527156 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:34.582127 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:34.595661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:34.595751 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:34.637306 1971155 cri.go:89] found id: ""
	I0120 14:04:34.637343 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.637355 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:34.637367 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:34.637440 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:34.676881 1971155 cri.go:89] found id: ""
	I0120 14:04:34.676913 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.676924 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:34.676929 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:34.676985 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:34.715677 1971155 cri.go:89] found id: ""
	I0120 14:04:34.715712 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.715723 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:34.715737 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:34.715801 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:34.754821 1971155 cri.go:89] found id: ""
	I0120 14:04:34.754855 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.754867 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:34.754875 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:34.754947 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:34.793093 1971155 cri.go:89] found id: ""
	I0120 14:04:34.793124 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.793133 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:34.793139 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:34.793200 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:34.830252 1971155 cri.go:89] found id: ""
	I0120 14:04:34.830285 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.830295 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:34.830302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:34.830370 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:34.869405 1971155 cri.go:89] found id: ""
	I0120 14:04:34.869436 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.869447 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:34.869455 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:34.869528 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:34.910676 1971155 cri.go:89] found id: ""
	I0120 14:04:34.910708 1971155 logs.go:282] 0 containers: []
	W0120 14:04:34.910721 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:34.910735 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:34.910751 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:34.961049 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:34.961094 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:34.976224 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:34.976260 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:35.049407 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:35.049434 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:35.049452 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:35.133338 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:35.133396 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:34.279211 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:36.778511 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:39.032716 1969949 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.853801532s)
	I0120 14:04:39.032805 1969949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:04:39.056153 1969949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:04:39.077937 1969949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:04:39.097957 1969949 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:04:39.097986 1969949 kubeadm.go:157] found existing configuration files:
	
	I0120 14:04:39.098074 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:04:39.127178 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:04:39.127249 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:04:39.140640 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:04:39.152447 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:04:39.152516 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:04:39.174543 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:04:39.185436 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:04:39.185521 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:04:39.196720 1969949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:04:39.207028 1969949 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:04:39.207105 1969949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:04:39.217474 1969949 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:04:39.273124 1969949 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:04:39.273208 1969949 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:04:39.402646 1969949 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:04:39.402821 1969949 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:04:39.402964 1969949 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:04:39.411696 1969949 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:04:39.413689 1969949 out.go:235]   - Generating certificates and keys ...
	I0120 14:04:39.413807 1969949 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:04:39.413895 1969949 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:04:39.414021 1969949 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:04:39.414131 1969949 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:04:39.414240 1969949 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:04:39.414333 1969949 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:04:39.414455 1969949 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:04:39.414538 1969949 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:04:39.414693 1969949 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:04:39.414814 1969949 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:04:39.414881 1969949 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:04:39.414976 1969949 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:04:39.516867 1969949 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:04:39.700148 1969949 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:04:39.838568 1969949 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:04:40.020807 1969949 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:04:40.083569 1969949 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:04:40.083953 1969949 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:04:40.086599 1969949 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:04:40.088383 1969949 out.go:235]   - Booting up control plane ...
	I0120 14:04:40.088515 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:04:40.090041 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:04:40.092450 1969949 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:04:40.114859 1969949 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:04:40.124692 1969949 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:04:40.124773 1969949 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:04:36.025541 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:38.027612 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:40.528385 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:37.676133 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:37.690435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:37.690520 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:37.732788 1971155 cri.go:89] found id: ""
	I0120 14:04:37.732824 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.732837 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:37.732846 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:37.732914 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:37.770338 1971155 cri.go:89] found id: ""
	I0120 14:04:37.770375 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.770387 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:37.770395 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:37.770461 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:37.813580 1971155 cri.go:89] found id: ""
	I0120 14:04:37.813612 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.813639 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:37.813645 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:37.813702 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:37.854706 1971155 cri.go:89] found id: ""
	I0120 14:04:37.854740 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.854751 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:37.854759 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:37.854841 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:37.891577 1971155 cri.go:89] found id: ""
	I0120 14:04:37.891607 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.891616 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:37.891623 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:37.891681 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:37.928718 1971155 cri.go:89] found id: ""
	I0120 14:04:37.928750 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.928762 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:37.928772 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:37.928844 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:37.964166 1971155 cri.go:89] found id: ""
	I0120 14:04:37.964203 1971155 logs.go:282] 0 containers: []
	W0120 14:04:37.964211 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:37.964218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:37.964279 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:38.005257 1971155 cri.go:89] found id: ""
	I0120 14:04:38.005299 1971155 logs.go:282] 0 containers: []
	W0120 14:04:38.005311 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:38.005325 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:38.005340 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:38.058706 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:38.058756 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:38.073507 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:38.073584 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:38.149050 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:38.149073 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:38.149091 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:38.227105 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:38.227163 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:40.772041 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:40.787399 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:40.787471 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:40.828186 1971155 cri.go:89] found id: ""
	I0120 14:04:40.828226 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.828247 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:40.828257 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:40.828327 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:40.869532 1971155 cri.go:89] found id: ""
	I0120 14:04:40.869561 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.869573 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:40.869581 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:40.869670 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:40.916288 1971155 cri.go:89] found id: ""
	I0120 14:04:40.916324 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.916343 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:40.916357 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:40.916425 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:40.953018 1971155 cri.go:89] found id: ""
	I0120 14:04:40.953053 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.953066 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:40.953076 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:40.953150 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:40.993977 1971155 cri.go:89] found id: ""
	I0120 14:04:40.994012 1971155 logs.go:282] 0 containers: []
	W0120 14:04:40.994024 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:40.994033 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:40.994104 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:41.037652 1971155 cri.go:89] found id: ""
	I0120 14:04:41.037678 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.037685 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:41.037692 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:41.037756 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:41.085826 1971155 cri.go:89] found id: ""
	I0120 14:04:41.085855 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.085864 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:41.085873 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:41.085950 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:41.128902 1971155 cri.go:89] found id: ""
	I0120 14:04:41.128939 1971155 logs.go:282] 0 containers: []
	W0120 14:04:41.128951 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:41.128965 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:41.128984 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:41.182933 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:41.182976 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:41.198454 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:41.198493 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:41.278062 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:41.278090 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:41.278106 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:41.359935 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:41.359983 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:39.279853 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:41.778833 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:43.779056 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:40.281534 1969949 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:04:40.281697 1969949 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:04:41.283107 1969949 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001641988s
	I0120 14:04:41.283223 1969949 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:04:43.026341 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:45.027225 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:46.784985 1969949 kubeadm.go:310] [api-check] The API server is healthy after 5.501686403s
	I0120 14:04:46.800497 1969949 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:04:46.826466 1969949 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:04:46.872907 1969949 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:04:46.873201 1969949 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-648067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:04:46.893113 1969949 kubeadm.go:310] [bootstrap-token] Using token: hll471.vkmzt8kk1d060cyb
	I0120 14:04:43.908548 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:43.927397 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:43.927492 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:43.975131 1971155 cri.go:89] found id: ""
	I0120 14:04:43.975160 1971155 logs.go:282] 0 containers: []
	W0120 14:04:43.975169 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:43.975175 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:43.975243 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:44.020970 1971155 cri.go:89] found id: ""
	I0120 14:04:44.021006 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.021018 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:44.021027 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:44.021135 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:44.067873 1971155 cri.go:89] found id: ""
	I0120 14:04:44.067914 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.067927 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:44.067936 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:44.068010 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:44.108047 1971155 cri.go:89] found id: ""
	I0120 14:04:44.108082 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.108093 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:44.108099 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:44.108161 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:44.149416 1971155 cri.go:89] found id: ""
	I0120 14:04:44.149449 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.149458 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:44.149466 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:44.149521 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:44.189664 1971155 cri.go:89] found id: ""
	I0120 14:04:44.189701 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.189712 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:44.189720 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:44.189787 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:44.233518 1971155 cri.go:89] found id: ""
	I0120 14:04:44.233548 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.233558 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:44.233565 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:44.233635 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:44.279568 1971155 cri.go:89] found id: ""
	I0120 14:04:44.279603 1971155 logs.go:282] 0 containers: []
	W0120 14:04:44.279614 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:44.279626 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:44.279641 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:44.348693 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:44.348742 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:44.363510 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:44.363546 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:44.437107 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:44.437132 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:44.437146 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:44.516463 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:44.516512 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:47.065723 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:47.081983 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:47.082120 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:47.122906 1971155 cri.go:89] found id: ""
	I0120 14:04:47.122945 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.122958 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:47.122969 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:47.123060 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:47.166879 1971155 cri.go:89] found id: ""
	I0120 14:04:47.166916 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.166928 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:47.166937 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:47.167012 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:47.213675 1971155 cri.go:89] found id: ""
	I0120 14:04:47.213706 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.213715 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:47.213722 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:47.213778 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:47.254655 1971155 cri.go:89] found id: ""
	I0120 14:04:47.254692 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.254702 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:47.254711 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:47.254787 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:47.297680 1971155 cri.go:89] found id: ""
	I0120 14:04:47.297718 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.297731 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:47.297741 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:47.297829 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:47.337150 1971155 cri.go:89] found id: ""
	I0120 14:04:47.337179 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.337188 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:47.337194 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:47.337258 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:47.376190 1971155 cri.go:89] found id: ""
	I0120 14:04:47.376223 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.376234 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:47.376242 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:47.376343 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:47.424425 1971155 cri.go:89] found id: ""
	I0120 14:04:47.424465 1971155 logs.go:282] 0 containers: []
	W0120 14:04:47.424477 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:47.424491 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:47.424508 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:46.894672 1969949 out.go:235]   - Configuring RBAC rules ...
	I0120 14:04:46.894865 1969949 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:04:46.901221 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:04:46.911875 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:04:46.916856 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:04:46.922245 1969949 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:04:46.929769 1969949 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:04:47.194825 1969949 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:04:47.629977 1969949 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:04:48.194241 1969949 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:04:48.195072 1969949 kubeadm.go:310] 
	I0120 14:04:48.195176 1969949 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:04:48.195193 1969949 kubeadm.go:310] 
	I0120 14:04:48.195309 1969949 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:04:48.195319 1969949 kubeadm.go:310] 
	I0120 14:04:48.195353 1969949 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:04:48.195444 1969949 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:04:48.195583 1969949 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:04:48.195610 1969949 kubeadm.go:310] 
	I0120 14:04:48.195693 1969949 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:04:48.195705 1969949 kubeadm.go:310] 
	I0120 14:04:48.195767 1969949 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:04:48.195776 1969949 kubeadm.go:310] 
	I0120 14:04:48.195891 1969949 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:04:48.196003 1969949 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:04:48.196119 1969949 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:04:48.196143 1969949 kubeadm.go:310] 
	I0120 14:04:48.196264 1969949 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:04:48.196353 1969949 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:04:48.196374 1969949 kubeadm.go:310] 
	I0120 14:04:48.196486 1969949 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hll471.vkmzt8kk1d060cyb \
	I0120 14:04:48.196623 1969949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:04:48.196658 1969949 kubeadm.go:310] 	--control-plane 
	I0120 14:04:48.196668 1969949 kubeadm.go:310] 
	I0120 14:04:48.196788 1969949 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:04:48.196797 1969949 kubeadm.go:310] 
	I0120 14:04:48.196887 1969949 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hll471.vkmzt8kk1d060cyb \
	I0120 14:04:48.196999 1969949 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:04:48.198034 1969949 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:04:48.198074 1969949 cni.go:84] Creating CNI manager for ""
	I0120 14:04:48.198087 1969949 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:04:48.199935 1969949 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:04:46.278851 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:48.279224 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:48.201356 1969949 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:04:48.213317 1969949 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:04:48.232194 1969949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:04:48.232407 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-648067 minikube.k8s.io/updated_at=2025_01_20T14_04_48_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=no-preload-648067 minikube.k8s.io/primary=true
	I0120 14:04:48.232407 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:48.270777 1969949 ops.go:34] apiserver oom_adj: -16
	I0120 14:04:48.458517 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:48.959588 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:49.459308 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:49.958914 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:47.529098 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:50.025867 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:47.439773 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:47.439807 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:47.515012 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:47.515040 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:47.515077 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:47.602215 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:47.602253 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:47.647880 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:47.647910 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:50.211849 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:50.225773 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:50.225855 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:50.268626 1971155 cri.go:89] found id: ""
	I0120 14:04:50.268663 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.268676 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:50.268686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:50.268759 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:50.307523 1971155 cri.go:89] found id: ""
	I0120 14:04:50.307562 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.307575 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:50.307584 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:50.307656 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:50.347783 1971155 cri.go:89] found id: ""
	I0120 14:04:50.347820 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.347832 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:50.347840 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:50.347910 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:50.394427 1971155 cri.go:89] found id: ""
	I0120 14:04:50.394462 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.394474 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:50.394482 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:50.394564 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:50.434136 1971155 cri.go:89] found id: ""
	I0120 14:04:50.434168 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.434178 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:50.434187 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:50.434253 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:50.472220 1971155 cri.go:89] found id: ""
	I0120 14:04:50.472256 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.472268 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:50.472277 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:50.472341 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:50.513511 1971155 cri.go:89] found id: ""
	I0120 14:04:50.513541 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.513552 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:50.513560 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:50.513630 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:50.551073 1971155 cri.go:89] found id: ""
	I0120 14:04:50.551110 1971155 logs.go:282] 0 containers: []
	W0120 14:04:50.551121 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:50.551143 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:50.551163 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:50.565714 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:50.565744 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:50.651186 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:50.651214 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:50.651238 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:50.735185 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:50.735234 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:50.780258 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:50.780287 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:50.459078 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:50.958680 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:51.459194 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:51.958693 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:52.459624 1969949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:04:52.569627 1969949 kubeadm.go:1113] duration metric: took 4.337296975s to wait for elevateKubeSystemPrivileges
	I0120 14:04:52.569667 1969949 kubeadm.go:394] duration metric: took 5m3.880867579s to StartCluster
	I0120 14:04:52.569696 1969949 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:04:52.569799 1969949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:04:52.571249 1969949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:04:52.571569 1969949 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.76 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:04:52.571705 1969949 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:04:52.571794 1969949 addons.go:69] Setting storage-provisioner=true in profile "no-preload-648067"
	I0120 14:04:52.571819 1969949 addons.go:238] Setting addon storage-provisioner=true in "no-preload-648067"
	I0120 14:04:52.571819 1969949 addons.go:69] Setting default-storageclass=true in profile "no-preload-648067"
	W0120 14:04:52.571832 1969949 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:04:52.571833 1969949 addons.go:69] Setting metrics-server=true in profile "no-preload-648067"
	I0120 14:04:52.571850 1969949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-648067"
	I0120 14:04:52.571858 1969949 addons.go:238] Setting addon metrics-server=true in "no-preload-648067"
	W0120 14:04:52.571867 1969949 addons.go:247] addon metrics-server should already be in state true
	I0120 14:04:52.571861 1969949 addons.go:69] Setting dashboard=true in profile "no-preload-648067"
	I0120 14:04:52.571895 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571904 1969949 addons.go:238] Setting addon dashboard=true in "no-preload-648067"
	W0120 14:04:52.571919 1969949 addons.go:247] addon dashboard should already be in state true
	I0120 14:04:52.571873 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571957 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.571816 1969949 config.go:182] Loaded profile config "no-preload-648067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:04:52.572249 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572310 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572388 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572402 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572429 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572437 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.572388 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.572514 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.573278 1969949 out.go:177] * Verifying Kubernetes components...
	I0120 14:04:52.574697 1969949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:04:52.593445 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35109
	I0120 14:04:52.593972 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I0120 14:04:52.594196 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0120 14:04:52.594251 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594311 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0120 14:04:52.594456 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594699 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.594819 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.595051 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595058 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595072 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595075 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595878 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.595883 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.595967 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595978 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.595992 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.595994 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.596089 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.596460 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.596493 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.596495 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.596537 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.597392 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.597458 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.597937 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.597987 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.601273 1969949 addons.go:238] Setting addon default-storageclass=true in "no-preload-648067"
	W0120 14:04:52.601293 1969949 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:04:52.601328 1969949 host.go:66] Checking if "no-preload-648067" exists ...
	I0120 14:04:52.601665 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.601709 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.615800 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I0120 14:04:52.616400 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.617008 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.617030 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.617408 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.617522 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0120 14:04:52.617864 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.618536 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.619193 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.619209 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.619284 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0120 14:04:52.619647 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.619726 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.619909 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.620278 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.620296 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.620825 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.620943 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0120 14:04:52.621206 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.622123 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.622176 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.622220 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.623015 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.623665 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.623691 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.624470 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.625095 1969949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:04:52.625143 1969949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:04:52.625528 1969949 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:04:52.625540 1969949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:04:52.625550 1969949 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:04:52.627935 1969949 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:04:50.279663 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.280483 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:52.627964 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:04:52.627983 1969949 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:04:52.628010 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.628135 1969949 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:04:52.628150 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:04:52.628172 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.629358 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:04:52.629377 1969949 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:04:52.629400 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.632446 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633059 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633123 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.633132 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.633166 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633329 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.633372 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.633419 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.633507 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.633561 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.633761 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.634098 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.634129 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.634291 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.634635 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.634792 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.634816 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.635030 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.635288 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.635523 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.635673 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.649363 1969949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I0120 14:04:52.649962 1969949 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:04:52.650624 1969949 main.go:141] libmachine: Using API Version  1
	I0120 14:04:52.650650 1969949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:04:52.651046 1969949 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:04:52.651360 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetState
	I0120 14:04:52.653362 1969949 main.go:141] libmachine: (no-preload-648067) Calling .DriverName
	I0120 14:04:52.653620 1969949 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:04:52.653637 1969949 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:04:52.653657 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHHostname
	I0120 14:04:52.656950 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.657430 1969949 main.go:141] libmachine: (no-preload-648067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:3b:04", ip: ""} in network mk-no-preload-648067: {Iface:virbr1 ExpiryTime:2025-01-20 14:59:22 +0000 UTC Type:0 Mac:52:54:00:cb:3b:04 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:no-preload-648067 Clientid:01:52:54:00:cb:3b:04}
	I0120 14:04:52.657459 1969949 main.go:141] libmachine: (no-preload-648067) DBG | domain no-preload-648067 has defined IP address 192.168.39.76 and MAC address 52:54:00:cb:3b:04 in network mk-no-preload-648067
	I0120 14:04:52.657671 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHPort
	I0120 14:04:52.658472 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHKeyPath
	I0120 14:04:52.658685 1969949 main.go:141] libmachine: (no-preload-648067) Calling .GetSSHUsername
	I0120 14:04:52.658860 1969949 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/no-preload-648067/id_rsa Username:docker}
	I0120 14:04:52.827213 1969949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:04:52.892209 1969949 node_ready.go:35] waiting up to 6m0s for node "no-preload-648067" to be "Ready" ...
	I0120 14:04:52.927742 1969949 node_ready.go:49] node "no-preload-648067" has status "Ready":"True"
	I0120 14:04:52.927778 1969949 node_ready.go:38] duration metric: took 35.520382ms for node "no-preload-648067" to be "Ready" ...
	I0120 14:04:52.927792 1969949 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:52.945134 1969949 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace to be "Ready" ...
	I0120 14:04:52.998630 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:04:53.015208 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:04:53.015251 1969949 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:04:53.050964 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:04:53.053498 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:04:53.053531 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:04:53.131884 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:04:53.131915 1969949 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:04:53.156697 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:04:53.156734 1969949 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:04:53.267300 1969949 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:04:53.267329 1969949 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:04:53.267739 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:04:53.267765 1969949 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:04:53.452299 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:04:53.456705 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.456735 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.457124 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.457209 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.457135 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:53.457264 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.457356 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.457651 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.457667 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.461528 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:04:53.461555 1969949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:04:53.471471 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:53.471505 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:53.471848 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:53.471864 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:53.515363 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:04:53.515398 1969949 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:04:53.636963 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:04:53.637001 1969949 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:04:53.840979 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:04:53.841011 1969949 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:04:53.959045 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:04:53.959082 1969949 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:04:54.051582 1969949 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:04:54.051618 1969949 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:04:54.170664 1969949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:04:54.682801 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.631779213s)
	I0120 14:04:54.682872 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:54.682887 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:54.683248 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:54.683271 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:54.683286 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:54.683296 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:54.683571 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:54.683595 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:54.683577 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:54.982997 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:55.132956 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.680599793s)
	I0120 14:04:55.133021 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.133038 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.133526 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.133549 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.133560 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.133568 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.133526 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.133807 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.133831 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.133847 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.133867 1969949 addons.go:479] Verifying addon metrics-server=true in "no-preload-648067"
	I0120 14:04:52.026070 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:54.026722 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:55.971683 1969949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.800920116s)
	I0120 14:04:55.971747 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.971763 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.972123 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.972144 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.972155 1969949 main.go:141] libmachine: Making call to close driver server
	I0120 14:04:55.972163 1969949 main.go:141] libmachine: (no-preload-648067) Calling .Close
	I0120 14:04:55.972460 1969949 main.go:141] libmachine: (no-preload-648067) DBG | Closing plugin on server side
	I0120 14:04:55.973844 1969949 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:04:55.973867 1969949 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:04:55.975729 1969949 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-648067 addons enable metrics-server
	
	I0120 14:04:55.977469 1969949 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:04:53.331081 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:53.346851 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:53.346935 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:53.390862 1971155 cri.go:89] found id: ""
	I0120 14:04:53.390901 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.390915 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:53.390924 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:53.391007 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:53.433455 1971155 cri.go:89] found id: ""
	I0120 14:04:53.433482 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.433491 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:53.433497 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:53.433555 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:53.477771 1971155 cri.go:89] found id: ""
	I0120 14:04:53.477805 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.477817 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:53.477826 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:53.477898 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:53.518330 1971155 cri.go:89] found id: ""
	I0120 14:04:53.518365 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.518375 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:53.518384 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:53.518461 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:53.557755 1971155 cri.go:89] found id: ""
	I0120 14:04:53.557804 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.557817 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:53.557827 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:53.557907 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:53.600681 1971155 cri.go:89] found id: ""
	I0120 14:04:53.600718 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.600730 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:53.600739 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:53.600836 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:53.644255 1971155 cri.go:89] found id: ""
	I0120 14:04:53.644291 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.644302 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:53.644311 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:53.644398 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:53.681445 1971155 cri.go:89] found id: ""
	I0120 14:04:53.681485 1971155 logs.go:282] 0 containers: []
	W0120 14:04:53.681498 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:53.681513 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:53.681529 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:53.737076 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:53.737131 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:53.755500 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:53.755551 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:53.846378 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:53.846416 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:53.846435 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:53.956291 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:53.956337 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:56.505456 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:56.521259 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:56.521352 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:56.572379 1971155 cri.go:89] found id: ""
	I0120 14:04:56.572415 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.572427 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:56.572435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:56.572503 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:56.613123 1971155 cri.go:89] found id: ""
	I0120 14:04:56.613151 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.613162 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:56.613170 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:56.613237 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:56.650863 1971155 cri.go:89] found id: ""
	I0120 14:04:56.650896 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.650904 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:56.650911 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:56.650967 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:56.686709 1971155 cri.go:89] found id: ""
	I0120 14:04:56.686741 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.686749 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:56.686756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:56.686813 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:56.722765 1971155 cri.go:89] found id: ""
	I0120 14:04:56.722794 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.722802 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:56.722809 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:56.722867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:56.762188 1971155 cri.go:89] found id: ""
	I0120 14:04:56.762223 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.762235 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:56.762244 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:56.762321 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:56.807714 1971155 cri.go:89] found id: ""
	I0120 14:04:56.807741 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.807750 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:56.807756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:56.807818 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:56.846817 1971155 cri.go:89] found id: ""
	I0120 14:04:56.846851 1971155 logs.go:282] 0 containers: []
	W0120 14:04:56.846860 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:56.846870 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:04:56.846884 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:04:56.919562 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:04:56.919593 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:56.919613 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:04:57.007957 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:04:57.008011 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:04:57.051295 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:04:57.051339 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:04:57.104114 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:04:57.104172 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:04:54.779036 1970602 pod_ready.go:103] pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:56.272135 1970602 pod_ready.go:82] duration metric: took 4m0.000512351s for pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace to be "Ready" ...
	E0120 14:04:56.272179 1970602 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-gx5f6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:04:56.272203 1970602 pod_ready.go:39] duration metric: took 4m14.631982517s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:04:56.272284 1970602 kubeadm.go:597] duration metric: took 4m21.961665482s to restartPrimaryControlPlane
	W0120 14:04:56.272373 1970602 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:04:56.272404 1970602 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:04:55.979014 1969949 addons.go:514] duration metric: took 3.407316682s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:04:57.451990 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.452924 1969949 pod_ready.go:103] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:56.527827 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.026535 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:04:59.620229 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:04:59.637010 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:04:59.637114 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:04:59.680984 1971155 cri.go:89] found id: ""
	I0120 14:04:59.681020 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.681032 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:04:59.681041 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:04:59.681128 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:04:59.725445 1971155 cri.go:89] found id: ""
	I0120 14:04:59.725480 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.725492 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:04:59.725501 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:04:59.725573 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:04:59.767962 1971155 cri.go:89] found id: ""
	I0120 14:04:59.767999 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.768012 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:04:59.768020 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:04:59.768091 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:04:59.812201 1971155 cri.go:89] found id: ""
	I0120 14:04:59.812240 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.812252 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:04:59.812267 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:04:59.812335 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:04:59.853005 1971155 cri.go:89] found id: ""
	I0120 14:04:59.853034 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.853043 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:04:59.853049 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:04:59.853131 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:04:59.890747 1971155 cri.go:89] found id: ""
	I0120 14:04:59.890859 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.890878 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:04:59.890889 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:04:59.890969 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:04:59.934050 1971155 cri.go:89] found id: ""
	I0120 14:04:59.934090 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.934102 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:04:59.934110 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:04:59.934179 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:04:59.977069 1971155 cri.go:89] found id: ""
	I0120 14:04:59.977106 1971155 logs.go:282] 0 containers: []
	W0120 14:04:59.977119 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:04:59.977131 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:04:59.977150 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:00.070208 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:00.070261 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:00.116521 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:00.116557 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:00.175645 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:00.175695 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:00.192183 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:00.192228 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:00.273233 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:01.452480 1969949 pod_ready.go:93] pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.452519 1969949 pod_ready.go:82] duration metric: took 8.507352286s for pod "coredns-668d6bf9bc-2fbd7" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.452534 1969949 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.458456 1969949 pod_ready.go:93] pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.458488 1969949 pod_ready.go:82] duration metric: took 5.941966ms for pod "coredns-668d6bf9bc-86xhz" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.458503 1969949 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.465708 1969949 pod_ready.go:93] pod "etcd-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.465733 1969949 pod_ready.go:82] duration metric: took 7.221959ms for pod "etcd-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.465745 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.473764 1969949 pod_ready.go:93] pod "kube-apiserver-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.473796 1969949 pod_ready.go:82] duration metric: took 8.041648ms for pod "kube-apiserver-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.473815 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.480463 1969949 pod_ready.go:93] pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.480494 1969949 pod_ready.go:82] duration metric: took 6.670074ms for pod "kube-controller-manager-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.480508 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kr6tq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.849787 1969949 pod_ready.go:93] pod "kube-proxy-kr6tq" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:01.849820 1969949 pod_ready.go:82] duration metric: took 369.302403ms for pod "kube-proxy-kr6tq" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:01.849834 1969949 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:02.250242 1969949 pod_ready.go:93] pod "kube-scheduler-no-preload-648067" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:02.250279 1969949 pod_ready.go:82] duration metric: took 400.436958ms for pod "kube-scheduler-no-preload-648067" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:02.250289 1969949 pod_ready.go:39] duration metric: took 9.322472589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:02.250305 1969949 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:02.250373 1969949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:02.307690 1969949 api_server.go:72] duration metric: took 9.736077102s to wait for apiserver process to appear ...
	I0120 14:05:02.307725 1969949 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:02.307751 1969949 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8443/healthz ...
	I0120 14:05:02.312837 1969949 api_server.go:279] https://192.168.39.76:8443/healthz returned 200:
	ok
	I0120 14:05:02.314012 1969949 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:02.314038 1969949 api_server.go:131] duration metric: took 6.305469ms to wait for apiserver health ...
	I0120 14:05:02.314047 1969949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:02.454048 1969949 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:02.454092 1969949 system_pods.go:61] "coredns-668d6bf9bc-2fbd7" [d2cf52fe-b375-47cd-a4bf-d7ef8e07a4b7] Running
	I0120 14:05:02.454099 1969949 system_pods.go:61] "coredns-668d6bf9bc-86xhz" [4af72226-8186-40e7-a923-01381cc52731] Running
	I0120 14:05:02.454104 1969949 system_pods.go:61] "etcd-no-preload-648067" [87debb8b-80bc-41cc-91f3-7b905ab8177c] Running
	I0120 14:05:02.454109 1969949 system_pods.go:61] "kube-apiserver-no-preload-648067" [6b1f5f1b-67ae-4ab2-a186-1c5224fcbc4e] Running
	I0120 14:05:02.454114 1969949 system_pods.go:61] "kube-controller-manager-no-preload-648067" [1bf90869-71a8-4459-a1b8-b59f78af8a8b] Running
	I0120 14:05:02.454119 1969949 system_pods.go:61] "kube-proxy-kr6tq" [462ab3d1-c225-4319-bac8-926a1e43a14d] Running
	I0120 14:05:02.454125 1969949 system_pods.go:61] "kube-scheduler-no-preload-648067" [38edfe65-9c58-4a24-b108-c22846010b97] Running
	I0120 14:05:02.454136 1969949 system_pods.go:61] "metrics-server-f79f97bbb-9kb5f" [fb8dd9df-cd37-4779-af22-4abd91dbc421] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:02.454144 1969949 system_pods.go:61] "storage-provisioner" [12bde765-1258-4689-b448-64208dd30638] Running
	I0120 14:05:02.454158 1969949 system_pods.go:74] duration metric: took 140.103109ms to wait for pod list to return data ...
	I0120 14:05:02.454172 1969949 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:02.650007 1969949 default_sa.go:45] found service account: "default"
	I0120 14:05:02.650050 1969949 default_sa.go:55] duration metric: took 195.869128ms for default service account to be created ...
	I0120 14:05:02.650064 1969949 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:02.853144 1969949 system_pods.go:87] 9 kube-system pods found
	I0120 14:05:01.028886 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:03.526512 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:05.527941 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:02.773877 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:02.788560 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:02.788661 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:02.838025 1971155 cri.go:89] found id: ""
	I0120 14:05:02.838061 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.838073 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:02.838082 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:02.838152 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:02.879106 1971155 cri.go:89] found id: ""
	I0120 14:05:02.879139 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.879150 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:02.879158 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:02.879226 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:02.919842 1971155 cri.go:89] found id: ""
	I0120 14:05:02.919883 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.919896 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:02.919905 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:02.919978 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:02.959612 1971155 cri.go:89] found id: ""
	I0120 14:05:02.959644 1971155 logs.go:282] 0 containers: []
	W0120 14:05:02.959656 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:02.959664 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:02.959737 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:03.018360 1971155 cri.go:89] found id: ""
	I0120 14:05:03.018392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.018401 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:03.018408 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:03.018491 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:03.064749 1971155 cri.go:89] found id: ""
	I0120 14:05:03.064779 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.064788 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:03.064801 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:03.064874 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:03.114566 1971155 cri.go:89] found id: ""
	I0120 14:05:03.114595 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.114617 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:03.114626 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:03.114695 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:03.163672 1971155 cri.go:89] found id: ""
	I0120 14:05:03.163707 1971155 logs.go:282] 0 containers: []
	W0120 14:05:03.163720 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:03.163733 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:03.163750 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:03.243662 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:03.243718 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:03.261586 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:03.261629 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:03.358343 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:03.358377 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:03.358393 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:03.452803 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:03.452852 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:06.004224 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:06.019368 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:06.019459 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:06.068617 1971155 cri.go:89] found id: ""
	I0120 14:05:06.068655 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.068668 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:06.068678 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:06.068747 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:06.112806 1971155 cri.go:89] found id: ""
	I0120 14:05:06.112859 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.112874 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:06.112883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:06.112960 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:06.150653 1971155 cri.go:89] found id: ""
	I0120 14:05:06.150695 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.150708 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:06.150716 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:06.150788 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:06.190915 1971155 cri.go:89] found id: ""
	I0120 14:05:06.190958 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.190973 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:06.190992 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:06.191077 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:06.237577 1971155 cri.go:89] found id: ""
	I0120 14:05:06.237616 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.237627 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:06.237636 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:06.237712 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:06.280826 1971155 cri.go:89] found id: ""
	I0120 14:05:06.280861 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.280873 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:06.280883 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:06.280958 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:06.317836 1971155 cri.go:89] found id: ""
	I0120 14:05:06.317872 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.317883 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:06.317892 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:06.317962 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:06.365531 1971155 cri.go:89] found id: ""
	I0120 14:05:06.365574 1971155 logs.go:282] 0 containers: []
	W0120 14:05:06.365587 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:06.365601 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:06.365626 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:06.460369 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:06.460403 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:06.460422 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:06.541919 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:06.541967 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:06.588755 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:06.588805 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:06.648087 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:06.648140 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:08.026139 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:10.026227 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:09.166758 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:09.184071 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:09.184193 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:09.222998 1971155 cri.go:89] found id: ""
	I0120 14:05:09.223035 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.223048 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:09.223056 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:09.223140 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:09.275875 1971155 cri.go:89] found id: ""
	I0120 14:05:09.275912 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.275926 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:09.275934 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:09.276006 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:09.320157 1971155 cri.go:89] found id: ""
	I0120 14:05:09.320192 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.320210 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:09.320218 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:09.320309 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:09.366463 1971155 cri.go:89] found id: ""
	I0120 14:05:09.366496 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.366505 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:09.366511 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:09.366582 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:09.414645 1971155 cri.go:89] found id: ""
	I0120 14:05:09.414675 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.414683 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:09.414689 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:09.414758 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:09.474004 1971155 cri.go:89] found id: ""
	I0120 14:05:09.474047 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.474059 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:09.474068 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:09.474153 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:09.536187 1971155 cri.go:89] found id: ""
	I0120 14:05:09.536217 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.536224 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:09.536230 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:09.536289 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:09.574100 1971155 cri.go:89] found id: ""
	I0120 14:05:09.574134 1971155 logs.go:282] 0 containers: []
	W0120 14:05:09.574142 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:09.574154 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:09.574167 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:09.620881 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:09.620923 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:09.676117 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:09.676177 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:09.692431 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:09.692473 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:09.768800 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:09.768831 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:09.768851 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:12.350771 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:12.365286 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:12.365374 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:12.402924 1971155 cri.go:89] found id: ""
	I0120 14:05:12.402966 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.402978 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:12.402998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:12.403073 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:12.027431 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:14.526570 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:12.442108 1971155 cri.go:89] found id: ""
	I0120 14:05:12.442138 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.442147 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:12.442154 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:12.442211 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:12.484002 1971155 cri.go:89] found id: ""
	I0120 14:05:12.484058 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.484071 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:12.484078 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:12.484149 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:12.524060 1971155 cri.go:89] found id: ""
	I0120 14:05:12.524097 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.524109 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:12.524118 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:12.524201 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:12.563120 1971155 cri.go:89] found id: ""
	I0120 14:05:12.563147 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.563156 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:12.563163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:12.563232 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:12.604782 1971155 cri.go:89] found id: ""
	I0120 14:05:12.604824 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.604838 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:12.604847 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:12.604925 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:12.642278 1971155 cri.go:89] found id: ""
	I0120 14:05:12.642305 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.642316 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:12.642326 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:12.642391 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:12.682274 1971155 cri.go:89] found id: ""
	I0120 14:05:12.682311 1971155 logs.go:282] 0 containers: []
	W0120 14:05:12.682323 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:12.682337 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:12.682353 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:12.773735 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:12.773785 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:12.825008 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:12.825049 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:12.873999 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:12.874042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:12.888767 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:12.888804 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:12.965739 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:15.466957 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:15.493756 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:15.493839 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:15.538680 1971155 cri.go:89] found id: ""
	I0120 14:05:15.538709 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.538717 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:15.538724 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:15.538783 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:15.583029 1971155 cri.go:89] found id: ""
	I0120 14:05:15.583069 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.583081 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:15.583089 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:15.583174 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:15.623762 1971155 cri.go:89] found id: ""
	I0120 14:05:15.623801 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.623815 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:15.623825 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:15.623903 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:15.663883 1971155 cri.go:89] found id: ""
	I0120 14:05:15.663921 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.663930 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:15.663938 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:15.664013 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:15.701723 1971155 cri.go:89] found id: ""
	I0120 14:05:15.701758 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.701769 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:15.701778 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:15.701847 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:15.741612 1971155 cri.go:89] found id: ""
	I0120 14:05:15.741649 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.741661 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:15.741670 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:15.741736 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:15.783225 1971155 cri.go:89] found id: ""
	I0120 14:05:15.783257 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.783267 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:15.783275 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:15.783353 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:15.823664 1971155 cri.go:89] found id: ""
	I0120 14:05:15.823699 1971155 logs.go:282] 0 containers: []
	W0120 14:05:15.823713 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:15.823725 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:15.823740 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:15.876890 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:15.876936 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:15.892034 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:15.892077 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:15.967939 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:15.967966 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:15.967982 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:16.049913 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:16.049961 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:16.527187 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:19.028271 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:18.599849 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:18.613686 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:18.613756 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:18.656070 1971155 cri.go:89] found id: ""
	I0120 14:05:18.656104 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.656113 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:18.656120 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:18.656184 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:18.694391 1971155 cri.go:89] found id: ""
	I0120 14:05:18.694420 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.694429 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:18.694435 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:18.694499 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:18.733057 1971155 cri.go:89] found id: ""
	I0120 14:05:18.733094 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.733107 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:18.733114 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:18.733187 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:18.770955 1971155 cri.go:89] found id: ""
	I0120 14:05:18.770985 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.770993 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:18.770998 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:18.771065 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:18.805878 1971155 cri.go:89] found id: ""
	I0120 14:05:18.805912 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.805924 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:18.805932 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:18.806015 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:18.843859 1971155 cri.go:89] found id: ""
	I0120 14:05:18.843891 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.843904 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:18.843912 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:18.843981 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:18.882554 1971155 cri.go:89] found id: ""
	I0120 14:05:18.882585 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.882594 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:18.882622 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:18.882686 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:18.919206 1971155 cri.go:89] found id: ""
	I0120 14:05:18.919242 1971155 logs.go:282] 0 containers: []
	W0120 14:05:18.919258 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:18.919269 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:18.919284 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:18.969428 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:18.969476 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:18.984666 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:18.984702 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:19.060472 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:19.060502 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:19.060517 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:19.136205 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:19.136248 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:21.681437 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:21.695755 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:21.695840 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:21.732554 1971155 cri.go:89] found id: ""
	I0120 14:05:21.732587 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.732599 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:21.732609 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:21.732680 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:21.771047 1971155 cri.go:89] found id: ""
	I0120 14:05:21.771078 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.771087 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:21.771093 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:21.771149 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:21.806053 1971155 cri.go:89] found id: ""
	I0120 14:05:21.806084 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.806096 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:21.806104 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:21.806176 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:21.843647 1971155 cri.go:89] found id: ""
	I0120 14:05:21.843679 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.843692 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:21.843699 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:21.843767 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:21.878399 1971155 cri.go:89] found id: ""
	I0120 14:05:21.878437 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.878449 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:21.878458 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:21.878531 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:21.912712 1971155 cri.go:89] found id: ""
	I0120 14:05:21.912746 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.912757 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:21.912770 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:21.912842 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:21.948182 1971155 cri.go:89] found id: ""
	I0120 14:05:21.948214 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.948225 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:21.948241 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:21.948311 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:21.987907 1971155 cri.go:89] found id: ""
	I0120 14:05:21.987945 1971155 logs.go:282] 0 containers: []
	W0120 14:05:21.987954 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:21.987964 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:21.987977 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:22.037198 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:22.037244 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:22.053238 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:22.053293 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:22.125680 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:22.125703 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:22.125721 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:22.208323 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:22.208371 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:21.529531 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:24.025073 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:24.752796 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:24.769865 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:24.769967 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:24.809247 1971155 cri.go:89] found id: ""
	I0120 14:05:24.809282 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.809293 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:24.809305 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:24.809378 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:24.849761 1971155 cri.go:89] found id: ""
	I0120 14:05:24.849788 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.849797 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:24.849803 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:24.849867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:24.892195 1971155 cri.go:89] found id: ""
	I0120 14:05:24.892226 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.892239 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:24.892249 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:24.892315 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:24.935367 1971155 cri.go:89] found id: ""
	I0120 14:05:24.935400 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.935412 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:24.935420 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:24.935488 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:24.980132 1971155 cri.go:89] found id: ""
	I0120 14:05:24.980164 1971155 logs.go:282] 0 containers: []
	W0120 14:05:24.980179 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:24.980188 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:24.980269 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:25.017365 1971155 cri.go:89] found id: ""
	I0120 14:05:25.017394 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.017405 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:25.017413 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:25.017487 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:25.059078 1971155 cri.go:89] found id: ""
	I0120 14:05:25.059115 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.059127 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:25.059163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:25.059276 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:25.099507 1971155 cri.go:89] found id: ""
	I0120 14:05:25.099545 1971155 logs.go:282] 0 containers: []
	W0120 14:05:25.099557 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:25.099571 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:25.099588 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:25.174356 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:25.174385 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:25.174412 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:25.260260 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:25.260303 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:25.304309 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:25.304342 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:25.358340 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:25.358388 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:24.178761 1970602 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.906332562s)
	I0120 14:05:24.178859 1970602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:05:24.194902 1970602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:05:24.206080 1970602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:05:24.217371 1970602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:05:24.217398 1970602 kubeadm.go:157] found existing configuration files:
	
	I0120 14:05:24.217448 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:05:24.227549 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:05:24.227627 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:05:24.238584 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:05:24.249016 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:05:24.249171 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:05:24.260537 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:05:24.270728 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:05:24.270792 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:05:24.281345 1970602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:05:24.291266 1970602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:05:24.291344 1970602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:05:24.302258 1970602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:05:24.477322 1970602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:05:26.026356 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:28.027425 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:30.525634 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:27.876603 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:27.892994 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:27.893071 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:27.931991 1971155 cri.go:89] found id: ""
	I0120 14:05:27.932048 1971155 logs.go:282] 0 containers: []
	W0120 14:05:27.932060 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:27.932068 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:27.932139 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:27.968882 1971155 cri.go:89] found id: ""
	I0120 14:05:27.968917 1971155 logs.go:282] 0 containers: []
	W0120 14:05:27.968926 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:27.968933 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:27.968998 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:28.009604 1971155 cri.go:89] found id: ""
	I0120 14:05:28.009635 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.009644 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:28.009650 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:28.009708 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:28.050036 1971155 cri.go:89] found id: ""
	I0120 14:05:28.050069 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.050080 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:28.050087 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:28.050156 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:28.092348 1971155 cri.go:89] found id: ""
	I0120 14:05:28.092392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.092427 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:28.092436 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:28.092512 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:28.133751 1971155 cri.go:89] found id: ""
	I0120 14:05:28.133787 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.133796 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:28.133804 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:28.133875 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:28.177231 1971155 cri.go:89] found id: ""
	I0120 14:05:28.177268 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.177280 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:28.177288 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:28.177382 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:28.217125 1971155 cri.go:89] found id: ""
	I0120 14:05:28.217160 1971155 logs.go:282] 0 containers: []
	W0120 14:05:28.217175 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:28.217189 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:28.217207 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:28.305446 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:28.305480 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:28.305498 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:28.389940 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:28.389996 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:28.445472 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:28.445519 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:28.503281 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:28.503343 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:31.023457 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:31.039576 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:31.039665 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:31.090049 1971155 cri.go:89] found id: ""
	I0120 14:05:31.090086 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.090099 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:31.090108 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:31.090199 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:31.129134 1971155 cri.go:89] found id: ""
	I0120 14:05:31.129168 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.129180 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:31.129189 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:31.129246 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:31.169790 1971155 cri.go:89] found id: ""
	I0120 14:05:31.169822 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.169834 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:31.169845 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:31.169940 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:31.210981 1971155 cri.go:89] found id: ""
	I0120 14:05:31.211017 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.211030 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:31.211039 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:31.211126 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:31.254051 1971155 cri.go:89] found id: ""
	I0120 14:05:31.254081 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.254089 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:31.254096 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:31.254175 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:31.301717 1971155 cri.go:89] found id: ""
	I0120 14:05:31.301750 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.301772 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:31.301782 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:31.301847 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:31.343204 1971155 cri.go:89] found id: ""
	I0120 14:05:31.343233 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.343242 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:31.343248 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:31.343304 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:31.382466 1971155 cri.go:89] found id: ""
	I0120 14:05:31.382501 1971155 logs.go:282] 0 containers: []
	W0120 14:05:31.382512 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:31.382525 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:31.382544 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:31.461732 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:31.461781 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:31.461801 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:31.559483 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:31.559566 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:31.606795 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:31.606833 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:31.661423 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:31.661468 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:33.376770 1970602 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:05:33.376853 1970602 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:05:33.376989 1970602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:05:33.377149 1970602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:05:33.377293 1970602 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:05:33.377400 1970602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:05:33.378924 1970602 out.go:235]   - Generating certificates and keys ...
	I0120 14:05:33.379025 1970602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:05:33.379104 1970602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:05:33.379208 1970602 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:05:33.379307 1970602 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:05:33.379417 1970602 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:05:33.379524 1970602 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:05:33.379607 1970602 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:05:33.379717 1970602 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:05:33.379839 1970602 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:05:33.379966 1970602 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:05:33.380043 1970602 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:05:33.380129 1970602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:05:33.380198 1970602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:05:33.380268 1970602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:05:33.380343 1970602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:05:33.380413 1970602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:05:33.380471 1970602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:05:33.380560 1970602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:05:33.380637 1970602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:05:33.382317 1970602 out.go:235]   - Booting up control plane ...
	I0120 14:05:33.382425 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:05:33.382512 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:05:33.382596 1970602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:05:33.382747 1970602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:05:33.382857 1970602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:05:33.382912 1970602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:05:33.383102 1970602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:05:33.383280 1970602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:05:33.383370 1970602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.354939ms
	I0120 14:05:33.383469 1970602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:05:33.383558 1970602 kubeadm.go:310] [api-check] The API server is healthy after 5.504896351s
	I0120 14:05:33.383728 1970602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:05:33.383925 1970602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:05:33.384013 1970602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:05:33.384335 1970602 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-647109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:05:33.384423 1970602 kubeadm.go:310] [bootstrap-token] Using token: lua4mv.z68od0ysi19pmefo
	I0120 14:05:33.386221 1970602 out.go:235]   - Configuring RBAC rules ...
	I0120 14:05:33.386365 1970602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:05:33.386446 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:05:33.386593 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:05:33.386761 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:05:33.386926 1970602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:05:33.387058 1970602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:05:33.387208 1970602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:05:33.387276 1970602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:05:33.387343 1970602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:05:33.387355 1970602 kubeadm.go:310] 
	I0120 14:05:33.387441 1970602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:05:33.387450 1970602 kubeadm.go:310] 
	I0120 14:05:33.387576 1970602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:05:33.387589 1970602 kubeadm.go:310] 
	I0120 14:05:33.387627 1970602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:05:33.387678 1970602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:05:33.387738 1970602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:05:33.387748 1970602 kubeadm.go:310] 
	I0120 14:05:33.387843 1970602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:05:33.387853 1970602 kubeadm.go:310] 
	I0120 14:05:33.387930 1970602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:05:33.387939 1970602 kubeadm.go:310] 
	I0120 14:05:33.388012 1970602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:05:33.388091 1970602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:05:33.388156 1970602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:05:33.388160 1970602 kubeadm.go:310] 
	I0120 14:05:33.388249 1970602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:05:33.388325 1970602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:05:33.388332 1970602 kubeadm.go:310] 
	I0120 14:05:33.388404 1970602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lua4mv.z68od0ysi19pmefo \
	I0120 14:05:33.388491 1970602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:05:33.388524 1970602 kubeadm.go:310] 	--control-plane 
	I0120 14:05:33.388531 1970602 kubeadm.go:310] 
	I0120 14:05:33.388617 1970602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:05:33.388625 1970602 kubeadm.go:310] 
	I0120 14:05:33.388736 1970602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lua4mv.z68od0ysi19pmefo \
	I0120 14:05:33.388834 1970602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:05:33.388846 1970602 cni.go:84] Creating CNI manager for ""
	I0120 14:05:33.388853 1970602 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:05:33.390876 1970602 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:05:33.392513 1970602 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:05:33.407354 1970602 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:05:33.428824 1970602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:05:33.428934 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:33.428977 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-647109 minikube.k8s.io/updated_at=2025_01_20T14_05_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=embed-certs-647109 minikube.k8s.io/primary=true
	I0120 14:05:33.473138 1970602 ops.go:34] apiserver oom_adj: -16
	I0120 14:05:33.718712 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:32.526764 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:35.026819 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:34.218762 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:34.719381 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:35.219746 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:35.718888 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:36.218775 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:36.718813 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:37.219353 1970602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:05:37.393979 1970602 kubeadm.go:1113] duration metric: took 3.965125255s to wait for elevateKubeSystemPrivileges
	I0120 14:05:37.394019 1970602 kubeadm.go:394] duration metric: took 5m3.132880668s to StartCluster
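
The repeated "kubectl get sa default" runs above are minikube polling until the default ServiceAccount exists before binding cluster-admin RBAC for kube-system (the "elevateKubeSystemPrivileges" step whose duration is reported). A minimal sketch of that retry pattern, re-using the exact command from the log; the 500ms interval and 2-minute timeout are assumptions:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Command taken verbatim from the log lines above.
	args := []string{
		"/var/lib/minikube/binaries/v1.32.0/kubectl",
		"get", "sa", "default",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	}

	for {
		// Succeeds once the default ServiceAccount has been created by the controller manager.
		if err := exec.CommandContext(ctx, "sudo", args...).Run(); err == nil {
			fmt.Println("default ServiceAccount is present")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for the default ServiceAccount")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
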
	I0120 14:05:37.394048 1970602 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:37.394150 1970602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:05:37.396378 1970602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:05:37.396706 1970602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.62 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:05:37.396823 1970602 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:05:37.396933 1970602 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-647109"
	I0120 14:05:37.396953 1970602 config.go:182] Loaded profile config "embed-certs-647109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:05:37.396970 1970602 addons.go:69] Setting metrics-server=true in profile "embed-certs-647109"
	I0120 14:05:37.396980 1970602 addons.go:238] Setting addon metrics-server=true in "embed-certs-647109"
	W0120 14:05:37.396988 1970602 addons.go:247] addon metrics-server should already be in state true
	I0120 14:05:37.396987 1970602 addons.go:69] Setting default-storageclass=true in profile "embed-certs-647109"
	I0120 14:05:37.396953 1970602 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-647109"
	I0120 14:05:37.397011 1970602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-647109"
	W0120 14:05:37.397012 1970602 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:05:37.397041 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397044 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397479 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397483 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397495 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397519 1970602 addons.go:69] Setting dashboard=true in profile "embed-certs-647109"
	I0120 14:05:37.397526 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397532 1970602 addons.go:238] Setting addon dashboard=true in "embed-certs-647109"
	W0120 14:05:37.397539 1970602 addons.go:247] addon dashboard should already be in state true
	I0120 14:05:37.397563 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.397606 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397785 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.397855 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.397900 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.401795 1970602 out.go:177] * Verifying Kubernetes components...
	I0120 14:05:34.179481 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:34.195424 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:34.195496 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:34.236592 1971155 cri.go:89] found id: ""
	I0120 14:05:34.236623 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.236632 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:34.236639 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:34.236696 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:34.275803 1971155 cri.go:89] found id: ""
	I0120 14:05:34.275836 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.275848 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:34.275855 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:34.275944 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:34.315900 1971155 cri.go:89] found id: ""
	I0120 14:05:34.315932 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.315944 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:34.315952 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:34.316019 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:34.353614 1971155 cri.go:89] found id: ""
	I0120 14:05:34.353646 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.353655 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:34.353661 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:34.353735 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:34.395635 1971155 cri.go:89] found id: ""
	I0120 14:05:34.395673 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.395685 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:34.395698 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:34.395782 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:34.435631 1971155 cri.go:89] found id: ""
	I0120 14:05:34.435662 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.435672 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:34.435678 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:34.435742 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:34.474904 1971155 cri.go:89] found id: ""
	I0120 14:05:34.474940 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.474952 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:34.474960 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:34.475030 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:34.513643 1971155 cri.go:89] found id: ""
	I0120 14:05:34.513675 1971155 logs.go:282] 0 containers: []
	W0120 14:05:34.513688 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:34.513701 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:34.513719 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:34.531525 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:34.531559 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:34.614600 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:34.614649 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:34.614667 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:34.691236 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:34.691282 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:34.739567 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:34.739616 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:37.294798 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:37.313219 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:37.313309 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:37.360355 1971155 cri.go:89] found id: ""
	I0120 14:05:37.360392 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.360406 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:37.360415 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:37.360493 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:37.400427 1971155 cri.go:89] found id: ""
	I0120 14:05:37.400456 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.400466 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:37.400475 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:37.400535 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:37.403396 1970602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:05:37.419631 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0120 14:05:37.419631 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0120 14:05:37.419751 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0120 14:05:37.420159 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.420340 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.420726 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.420753 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.420870 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.420883 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.421153 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.421286 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.421765 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.421807 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.421859 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.421907 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.423180 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.424356 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43445
	I0120 14:05:37.424853 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.427176 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.427218 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.431273 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.431306 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.431273 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.431590 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.431772 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.432414 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.432463 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.438218 1970602 addons.go:238] Setting addon default-storageclass=true in "embed-certs-647109"
	W0120 14:05:37.438363 1970602 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:05:37.438408 1970602 host.go:66] Checking if "embed-certs-647109" exists ...
	I0120 14:05:37.438859 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.439701 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.444146 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0120 14:05:37.444576 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41243
	I0120 14:05:37.444773 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.444915 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.445334 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.445367 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.445548 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.445565 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.445846 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.445940 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.446010 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.446155 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.448263 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.448850 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.451121 1970602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:05:37.451145 1970602 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:05:37.452901 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:05:37.452925 1970602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:05:37.452946 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.453029 1970602 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:37.453046 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:05:37.453066 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.457009 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457306 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.457323 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457535 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.457644 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.457758 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.457905 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.458015 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.458314 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.458329 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.458460 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.458637 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.458741 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.458835 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.465409 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44303
	I0120 14:05:37.466031 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.466695 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.466719 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.466964 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41927
	I0120 14:05:37.467498 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.467603 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.468062 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.468085 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.468561 1970602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:05:37.468603 1970602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:05:37.469079 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.469289 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.471308 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.473344 1970602 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:05:37.475133 1970602 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:05:37.476628 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:05:37.476660 1970602 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:05:37.476691 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.480284 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.480952 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.480993 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.481641 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.481944 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.482177 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.482403 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.509821 1970602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44375
	I0120 14:05:37.510356 1970602 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:05:37.511017 1970602 main.go:141] libmachine: Using API Version  1
	I0120 14:05:37.511041 1970602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:05:37.511533 1970602 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:05:37.511923 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetState
	I0120 14:05:37.514239 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .DriverName
	I0120 14:05:37.514505 1970602 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:37.514525 1970602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:05:37.514547 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHHostname
	I0120 14:05:37.518318 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.518891 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:ac:09", ip: ""} in network mk-embed-certs-647109: {Iface:virbr2 ExpiryTime:2025-01-20 15:00:16 +0000 UTC Type:0 Mac:52:54:00:31:ac:09 Iaid: IPaddr:192.168.50.62 Prefix:24 Hostname:embed-certs-647109 Clientid:01:52:54:00:31:ac:09}
	I0120 14:05:37.518919 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | domain embed-certs-647109 has defined IP address 192.168.50.62 and MAC address 52:54:00:31:ac:09 in network mk-embed-certs-647109
	I0120 14:05:37.519100 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHPort
	I0120 14:05:37.519331 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHKeyPath
	I0120 14:05:37.519489 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .GetSSHUsername
	I0120 14:05:37.519722 1970602 sshutil.go:53] new ssh client: &{IP:192.168.50.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/embed-certs-647109/id_rsa Username:docker}
	I0120 14:05:37.741139 1970602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:05:37.799051 1970602 node_ready.go:35] waiting up to 6m0s for node "embed-certs-647109" to be "Ready" ...
	I0120 14:05:37.809096 1970602 node_ready.go:49] node "embed-certs-647109" has status "Ready":"True"
	I0120 14:05:37.809130 1970602 node_ready.go:38] duration metric: took 10.033158ms for node "embed-certs-647109" to be "Ready" ...
	I0120 14:05:37.809146 1970602 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:37.819590 1970602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:37.940986 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:05:37.994181 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:05:37.994215 1970602 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:05:38.057795 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:05:38.057828 1970602 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:05:38.074299 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:05:38.074328 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:05:38.076399 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:05:38.161099 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:05:38.161133 1970602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:05:38.172032 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:05:38.172066 1970602 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:05:38.251253 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:05:38.251287 1970602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:05:38.267793 1970602 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:38.267823 1970602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:05:38.300776 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:05:38.300806 1970602 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:05:38.438115 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:05:38.438263 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:05:38.438293 1970602 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:05:38.469992 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:05:38.470026 1970602 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:05:38.488178 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.488209 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.488602 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.488624 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.488633 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.488642 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.488642 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:38.488915 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.488928 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.506460 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:38.506490 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:38.506908 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:38.506932 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:38.506952 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:38.535768 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:05:38.535801 1970602 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:05:38.588204 1970602 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:38.588244 1970602 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:05:38.641430 1970602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:05:37.532230 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:40.026877 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:39.322794 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24634872s)
	I0120 14:05:39.322872 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:39.322888 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:39.323266 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:39.323312 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:39.323332 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:39.323342 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:39.323351 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:39.323616 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:39.323623 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:39.323633 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:39.850519 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:40.002690 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.564518983s)
	I0120 14:05:40.002772 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.002791 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.003274 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.003336 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.003360 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.003372 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.003382 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.003762 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.003779 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.003791 1970602 addons.go:479] Verifying addon metrics-server=true in "embed-certs-647109"
	I0120 14:05:40.003823 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.923510 1970602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.282025528s)
	I0120 14:05:40.923577 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.923608 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.923936 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.923983 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.924000 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.924023 1970602 main.go:141] libmachine: Making call to close driver server
	I0120 14:05:40.924034 1970602 main.go:141] libmachine: (embed-certs-647109) Calling .Close
	I0120 14:05:40.924348 1970602 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:05:40.924369 1970602 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:05:40.924375 1970602 main.go:141] libmachine: (embed-certs-647109) DBG | Closing plugin on server side
	I0120 14:05:40.926492 1970602 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-647109 addons enable metrics-server
	
	I0120 14:05:40.928141 1970602 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:05:37.472778 1971155 cri.go:89] found id: ""
	I0120 14:05:37.472800 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.472807 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:37.472814 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:37.472861 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:37.514813 1971155 cri.go:89] found id: ""
	I0120 14:05:37.514836 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.514846 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:37.514853 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:37.514912 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:37.559689 1971155 cri.go:89] found id: ""
	I0120 14:05:37.559724 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.559735 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:37.559768 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:37.559851 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:37.604249 1971155 cri.go:89] found id: ""
	I0120 14:05:37.604279 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.604291 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:37.604299 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:37.604372 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:37.655652 1971155 cri.go:89] found id: ""
	I0120 14:05:37.655689 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.655702 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:37.655710 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:37.655780 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:37.699626 1971155 cri.go:89] found id: ""
	I0120 14:05:37.699663 1971155 logs.go:282] 0 containers: []
	W0120 14:05:37.699677 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:37.699690 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:37.699706 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:37.761041 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:37.761105 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:37.789894 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:37.789933 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:37.870389 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:37.870424 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:37.870444 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:37.953788 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:37.953828 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:40.507832 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:40.526389 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:40.526479 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:40.564969 1971155 cri.go:89] found id: ""
	I0120 14:05:40.565007 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.565019 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:40.565028 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:40.565102 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:40.610815 1971155 cri.go:89] found id: ""
	I0120 14:05:40.610851 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.610863 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:40.610879 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:40.610950 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:40.656202 1971155 cri.go:89] found id: ""
	I0120 14:05:40.656241 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.656253 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:40.656261 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:40.656332 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:40.696520 1971155 cri.go:89] found id: ""
	I0120 14:05:40.696555 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.696567 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:40.696576 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:40.696655 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:40.741177 1971155 cri.go:89] found id: ""
	I0120 14:05:40.741213 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.741224 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:40.741232 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:40.741321 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:40.787423 1971155 cri.go:89] found id: ""
	I0120 14:05:40.787463 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.787476 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:40.787486 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:40.787560 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:40.838180 1971155 cri.go:89] found id: ""
	I0120 14:05:40.838217 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.838227 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:40.838235 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:40.838308 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:40.877888 1971155 cri.go:89] found id: ""
	I0120 14:05:40.877922 1971155 logs.go:282] 0 containers: []
	W0120 14:05:40.877934 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:40.877947 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:40.877962 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:40.942664 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:40.942718 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:40.960105 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:40.960147 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:41.038583 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:41.038640 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:41.038660 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:41.125430 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:41.125499 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:40.930035 1970602 addons.go:514] duration metric: took 3.533222189s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:05:42.330147 1970602 pod_ready.go:103] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:43.342012 1970602 pod_ready.go:93] pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.342038 1970602 pod_ready.go:82] duration metric: took 5.522419293s for pod "coredns-668d6bf9bc-ndbzp" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.342050 1970602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.359479 1970602 pod_ready.go:93] pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.359506 1970602 pod_ready.go:82] duration metric: took 17.448444ms for pod "coredns-668d6bf9bc-ndv97" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.359518 1970602 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.403702 1970602 pod_ready.go:93] pod "etcd-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.403732 1970602 pod_ready.go:82] duration metric: took 44.20711ms for pod "etcd-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.403744 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.413596 1970602 pod_ready.go:93] pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.413623 1970602 pod_ready.go:82] duration metric: took 9.873022ms for pod "kube-apiserver-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.413634 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.421693 1970602 pod_ready.go:93] pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.421718 1970602 pod_ready.go:82] duration metric: took 8.077458ms for pod "kube-controller-manager-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.421731 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-chhpt" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.724510 1970602 pod_ready.go:93] pod "kube-proxy-chhpt" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:43.724537 1970602 pod_ready.go:82] duration metric: took 302.799519ms for pod "kube-proxy-chhpt" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:43.724549 1970602 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:42.527349 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:45.026552 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:43.677350 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:43.695745 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:43.695838 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:43.746662 1971155 cri.go:89] found id: ""
	I0120 14:05:43.746695 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.746710 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:43.746718 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:43.746791 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:43.802111 1971155 cri.go:89] found id: ""
	I0120 14:05:43.802142 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.802154 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:43.802163 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:43.802234 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:43.849314 1971155 cri.go:89] found id: ""
	I0120 14:05:43.849351 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.849363 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:43.849371 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:43.849444 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:43.898242 1971155 cri.go:89] found id: ""
	I0120 14:05:43.898279 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.898293 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:43.898302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:43.898384 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:43.939248 1971155 cri.go:89] found id: ""
	I0120 14:05:43.939282 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.939293 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:43.939302 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:43.939369 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:43.979271 1971155 cri.go:89] found id: ""
	I0120 14:05:43.979307 1971155 logs.go:282] 0 containers: []
	W0120 14:05:43.979327 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:43.979336 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:43.979408 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:44.016351 1971155 cri.go:89] found id: ""
	I0120 14:05:44.016387 1971155 logs.go:282] 0 containers: []
	W0120 14:05:44.016400 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:44.016409 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:44.016479 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:44.060965 1971155 cri.go:89] found id: ""
	I0120 14:05:44.061005 1971155 logs.go:282] 0 containers: []
	W0120 14:05:44.061017 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:44.061032 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:44.061050 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:44.076017 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:44.076070 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:44.159732 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:44.159761 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:44.159775 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:44.240721 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:44.240769 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:44.285018 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:44.285061 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:46.839125 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:46.856748 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:46.856841 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:46.908851 1971155 cri.go:89] found id: ""
	I0120 14:05:46.908886 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.908898 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:46.908909 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:46.908978 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:46.949810 1971155 cri.go:89] found id: ""
	I0120 14:05:46.949865 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.949879 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:46.949887 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:46.949969 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:46.995158 1971155 cri.go:89] found id: ""
	I0120 14:05:46.995191 1971155 logs.go:282] 0 containers: []
	W0120 14:05:46.995201 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:46.995212 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:46.995284 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:47.034872 1971155 cri.go:89] found id: ""
	I0120 14:05:47.034905 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.034916 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:47.034924 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:47.034993 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:47.077500 1971155 cri.go:89] found id: ""
	I0120 14:05:47.077529 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.077537 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:47.077544 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:47.077608 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:47.118996 1971155 cri.go:89] found id: ""
	I0120 14:05:47.119027 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.119048 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:47.119059 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:47.119126 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:47.159902 1971155 cri.go:89] found id: ""
	I0120 14:05:47.159931 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.159943 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:47.159952 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:47.160027 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:47.201895 1971155 cri.go:89] found id: ""
	I0120 14:05:47.201922 1971155 logs.go:282] 0 containers: []
	W0120 14:05:47.201930 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:47.201942 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:47.201957 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:47.244852 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:47.244888 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:47.297439 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:47.297486 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:47.313519 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:47.313558 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:47.389340 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:47.389372 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:47.389391 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:45.324683 1970602 pod_ready.go:93] pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace has status "Ready":"True"
	I0120 14:05:45.324712 1970602 pod_ready.go:82] duration metric: took 1.600155124s for pod "kube-scheduler-embed-certs-647109" in "kube-system" namespace to be "Ready" ...
	I0120 14:05:45.324723 1970602 pod_ready.go:39] duration metric: took 7.515564286s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:05:45.324743 1970602 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:05:45.324813 1970602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:45.381331 1970602 api_server.go:72] duration metric: took 7.98457351s to wait for apiserver process to appear ...
	I0120 14:05:45.381368 1970602 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:05:45.381388 1970602 api_server.go:253] Checking apiserver healthz at https://192.168.50.62:8443/healthz ...
	I0120 14:05:45.386523 1970602 api_server.go:279] https://192.168.50.62:8443/healthz returned 200:
	ok
	I0120 14:05:45.387477 1970602 api_server.go:141] control plane version: v1.32.0
	I0120 14:05:45.387504 1970602 api_server.go:131] duration metric: took 6.127764ms to wait for apiserver health ...
	I0120 14:05:45.387513 1970602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:05:45.530457 1970602 system_pods.go:59] 9 kube-system pods found
	I0120 14:05:45.530502 1970602 system_pods.go:61] "coredns-668d6bf9bc-ndbzp" [d43c588e-6fc1-435b-9c9a-8b19201596ae] Running
	I0120 14:05:45.530510 1970602 system_pods.go:61] "coredns-668d6bf9bc-ndv97" [3298cf5d-5983-463b-8aca-792fa1d94241] Running
	I0120 14:05:45.530516 1970602 system_pods.go:61] "etcd-embed-certs-647109" [58f40005-bda9-4a38-8e2a-8e3f4a869c20] Running
	I0120 14:05:45.530521 1970602 system_pods.go:61] "kube-apiserver-embed-certs-647109" [8e188c16-1d56-4972-baf1-20d8dd10f440] Running
	I0120 14:05:45.530527 1970602 system_pods.go:61] "kube-controller-manager-embed-certs-647109" [691924af-9adb-4788-9104-0dcca6ee95f3] Running
	I0120 14:05:45.530532 1970602 system_pods.go:61] "kube-proxy-chhpt" [a0244020-668f-4700-85c2-9562f4d0c920] Running
	I0120 14:05:45.530537 1970602 system_pods.go:61] "kube-scheduler-embed-certs-647109" [6b42ab84-e4cb-4dc8-a4ad-e7da476ec3b2] Running
	I0120 14:05:45.530548 1970602 system_pods.go:61] "metrics-server-f79f97bbb-nqwxp" [68d39045-4c01-40a2-9e8f-0f7734838f0b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:05:45.530559 1970602 system_pods.go:61] "storage-provisioner" [8067c033-4ef4-4945-95b5-f4120df75f5c] Running
	I0120 14:05:45.530574 1970602 system_pods.go:74] duration metric: took 143.054434ms to wait for pod list to return data ...
	I0120 14:05:45.530587 1970602 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:05:45.727314 1970602 default_sa.go:45] found service account: "default"
	I0120 14:05:45.727359 1970602 default_sa.go:55] duration metric: took 196.759471ms for default service account to be created ...
	I0120 14:05:45.727373 1970602 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:05:45.927406 1970602 system_pods.go:87] 9 kube-system pods found
	I0120 14:05:47.027640 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:49.526205 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:49.969003 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:49.983821 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:49.983904 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:50.024496 1971155 cri.go:89] found id: ""
	I0120 14:05:50.024525 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.024536 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:50.024545 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:50.024611 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:50.066376 1971155 cri.go:89] found id: ""
	I0120 14:05:50.066408 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.066416 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:50.066423 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:50.066497 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:50.106918 1971155 cri.go:89] found id: ""
	I0120 14:05:50.107034 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.107055 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:50.107065 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:50.107154 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:50.154846 1971155 cri.go:89] found id: ""
	I0120 14:05:50.154940 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.154962 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:50.154981 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:50.155095 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:50.228177 1971155 cri.go:89] found id: ""
	I0120 14:05:50.228218 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.228238 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:50.228249 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:50.228334 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:50.294106 1971155 cri.go:89] found id: ""
	I0120 14:05:50.294145 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.294158 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:50.294167 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:50.294242 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:50.340312 1971155 cri.go:89] found id: ""
	I0120 14:05:50.340357 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.340368 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:50.340375 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:50.340450 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:50.384031 1971155 cri.go:89] found id: ""
	I0120 14:05:50.384070 1971155 logs.go:282] 0 containers: []
	W0120 14:05:50.384082 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:50.384095 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:50.384112 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:50.399361 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:50.399396 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:50.484820 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:50.484851 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:50.484868 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:50.594107 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:50.594171 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:50.647700 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:50.647740 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:51.527819 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:54.026000 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:53.213104 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:53.229463 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:53.229538 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:53.270860 1971155 cri.go:89] found id: ""
	I0120 14:05:53.270896 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.270909 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:53.270917 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:53.270977 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:53.311721 1971155 cri.go:89] found id: ""
	I0120 14:05:53.311748 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.311757 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:53.311764 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:53.311818 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:53.350019 1971155 cri.go:89] found id: ""
	I0120 14:05:53.350053 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.350064 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:53.350073 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:53.350144 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:53.386955 1971155 cri.go:89] found id: ""
	I0120 14:05:53.386982 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.386990 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:53.386996 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:53.387059 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:53.427056 1971155 cri.go:89] found id: ""
	I0120 14:05:53.427096 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.427105 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:53.427112 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:53.427170 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:53.468506 1971155 cri.go:89] found id: ""
	I0120 14:05:53.468546 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.468559 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:53.468568 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:53.468642 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:53.505884 1971155 cri.go:89] found id: ""
	I0120 14:05:53.505926 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.505938 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:53.505948 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:53.506024 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:53.547189 1971155 cri.go:89] found id: ""
	I0120 14:05:53.547232 1971155 logs.go:282] 0 containers: []
	W0120 14:05:53.547244 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:53.547258 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:53.547282 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:53.629525 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:53.629559 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:53.629577 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:53.711943 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:53.711994 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:53.761408 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:53.761442 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:53.815735 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:53.815781 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:56.332189 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:56.347525 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:05:56.347622 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:05:56.389104 1971155 cri.go:89] found id: ""
	I0120 14:05:56.389145 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.389156 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:05:56.389165 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:05:56.389244 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:05:56.427108 1971155 cri.go:89] found id: ""
	I0120 14:05:56.427151 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.427163 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:05:56.427173 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:05:56.427252 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:05:56.473424 1971155 cri.go:89] found id: ""
	I0120 14:05:56.473457 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.473469 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:05:56.473477 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:05:56.473560 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:05:56.513450 1971155 cri.go:89] found id: ""
	I0120 14:05:56.513485 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.513495 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:05:56.513504 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:05:56.513578 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:05:56.562482 1971155 cri.go:89] found id: ""
	I0120 14:05:56.562533 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.562546 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:05:56.562554 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:05:56.562652 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:05:56.604745 1971155 cri.go:89] found id: ""
	I0120 14:05:56.604776 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.604787 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:05:56.604795 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:05:56.604867 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:05:56.645202 1971155 cri.go:89] found id: ""
	I0120 14:05:56.645245 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.645259 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:05:56.645268 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:05:56.645343 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:05:56.686351 1971155 cri.go:89] found id: ""
	I0120 14:05:56.686379 1971155 logs.go:282] 0 containers: []
	W0120 14:05:56.686388 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:05:56.686405 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:05:56.686419 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0120 14:05:56.700157 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:05:56.700206 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:05:56.780260 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:05:56.780289 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:05:56.780306 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:05:56.859551 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:05:56.859590 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:05:56.900940 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:05:56.900970 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:05:56.027202 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:58.526277 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:00.527173 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:05:59.457051 1971155 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:05:59.472587 1971155 kubeadm.go:597] duration metric: took 4m3.227513478s to restartPrimaryControlPlane
	W0120 14:05:59.472685 1971155 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:05:59.472723 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:06:01.310474 1971155 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.837720995s)
	I0120 14:06:01.310572 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:06:01.327408 1971155 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:06:01.339235 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:06:01.350183 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:06:01.350209 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:06:01.350259 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:06:01.361183 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:06:01.361270 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:06:01.372352 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:06:01.382976 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:06:01.383040 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:06:01.394492 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:06:01.405628 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:06:01.405694 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:06:01.417040 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:06:01.428807 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:06:01.428872 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:06:01.441345 1971155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:06:01.698918 1971155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:06:03.025832 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:05.026627 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:07.027188 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:09.028290 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:11.031964 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:13.525789 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:15.526985 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:18.026476 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:20.027814 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:22.526030 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:24.526922 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:26.527440 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:28.528148 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:31.026333 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:33.527109 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:36.027336 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:38.526086 1971324 pod_ready.go:103] pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace has status "Ready":"False"
	I0120 14:06:39.020400 1971324 pod_ready.go:82] duration metric: took 4m0.001084886s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" ...
	E0120 14:06:39.020434 1971324 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-wgptn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0120 14:06:39.020464 1971324 pod_ready.go:39] duration metric: took 4m13.544546991s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:06:39.020512 1971324 kubeadm.go:597] duration metric: took 4m20.388785998s to restartPrimaryControlPlane
	W0120 14:06:39.020594 1971324 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0120 14:06:39.020633 1971324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:07:06.810143 1971324 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.789476664s)
	I0120 14:07:06.810247 1971324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:07:06.832457 1971324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0120 14:07:06.852749 1971324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:07:06.873857 1971324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:07:06.873882 1971324 kubeadm.go:157] found existing configuration files:
	
	I0120 14:07:06.873943 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0120 14:07:06.886791 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:07:06.886875 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:07:06.909304 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0120 14:07:06.925495 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:07:06.925578 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:07:06.946915 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0120 14:07:06.958045 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:07:06.958118 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:07:06.969792 1971324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0120 14:07:06.980477 1971324 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:07:06.980546 1971324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:07:06.992154 1971324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:07:07.047808 1971324 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I0120 14:07:07.048054 1971324 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:07.167444 1971324 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:07.167631 1971324 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:07.167755 1971324 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0120 14:07:07.176704 1971324 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:07.178906 1971324 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:07.179018 1971324 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:07.179096 1971324 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:07.179214 1971324 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:07.179292 1971324 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:07.179407 1971324 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:07.179531 1971324 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:07.179632 1971324 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:07.179728 1971324 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:07.179830 1971324 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:07.179923 1971324 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:07.180006 1971324 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:07.180105 1971324 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:07.399949 1971324 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:07.525338 1971324 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0120 14:07:07.958528 1971324 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:08.085273 1971324 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:08.227675 1971324 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:08.228174 1971324 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:08.230880 1971324 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:08.232690 1971324 out.go:235]   - Booting up control plane ...
	I0120 14:07:08.232803 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:08.232885 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:08.233165 1971324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:08.255003 1971324 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:08.263855 1971324 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:08.263966 1971324 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:08.414539 1971324 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0120 14:07:08.414702 1971324 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0120 14:07:08.915282 1971324 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.191909ms
	I0120 14:07:08.915410 1971324 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0120 14:07:14.418359 1971324 kubeadm.go:310] [api-check] The API server is healthy after 5.50145508s
	I0120 14:07:14.430935 1971324 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0120 14:07:14.460608 1971324 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0120 14:07:14.497450 1971324 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0120 14:07:14.497787 1971324 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-727256 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0120 14:07:14.515343 1971324 kubeadm.go:310] [bootstrap-token] Using token: tkd27p.2n22jx81j70drifi
	I0120 14:07:14.516953 1971324 out.go:235]   - Configuring RBAC rules ...
	I0120 14:07:14.517145 1971324 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0120 14:07:14.535550 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0120 14:07:14.549490 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0120 14:07:14.554516 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0120 14:07:14.559606 1971324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0120 14:07:14.567744 1971324 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0120 14:07:14.823696 1971324 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0120 14:07:15.255724 1971324 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0120 14:07:15.828061 1971324 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0120 14:07:15.829612 1971324 kubeadm.go:310] 
	I0120 14:07:15.829720 1971324 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0120 14:07:15.829734 1971324 kubeadm.go:310] 
	I0120 14:07:15.829934 1971324 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0120 14:07:15.829961 1971324 kubeadm.go:310] 
	I0120 14:07:15.829995 1971324 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0120 14:07:15.830134 1971324 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0120 14:07:15.830216 1971324 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0120 14:07:15.830238 1971324 kubeadm.go:310] 
	I0120 14:07:15.830300 1971324 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0120 14:07:15.830307 1971324 kubeadm.go:310] 
	I0120 14:07:15.830345 1971324 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0120 14:07:15.830351 1971324 kubeadm.go:310] 
	I0120 14:07:15.830452 1971324 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0120 14:07:15.830564 1971324 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0120 14:07:15.830687 1971324 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0120 14:07:15.830712 1971324 kubeadm.go:310] 
	I0120 14:07:15.830839 1971324 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0120 14:07:15.830917 1971324 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0120 14:07:15.830928 1971324 kubeadm.go:310] 
	I0120 14:07:15.831050 1971324 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token tkd27p.2n22jx81j70drifi \
	I0120 14:07:15.831203 1971324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 \
	I0120 14:07:15.831236 1971324 kubeadm.go:310] 	--control-plane 
	I0120 14:07:15.831250 1971324 kubeadm.go:310] 
	I0120 14:07:15.831373 1971324 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0120 14:07:15.831381 1971324 kubeadm.go:310] 
	I0120 14:07:15.831510 1971324 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token tkd27p.2n22jx81j70drifi \
	I0120 14:07:15.831608 1971324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:79a891e23663b293f696fbf23d5c27a98dbc122c7f94b4de1df7fdd66282a2e2 
	I0120 14:07:15.832608 1971324 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:07:15.832644 1971324 cni.go:84] Creating CNI manager for ""
	I0120 14:07:15.832665 1971324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 14:07:15.834574 1971324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0120 14:07:15.836200 1971324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0120 14:07:15.852486 1971324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0120 14:07:15.883072 1971324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0120 14:07:15.883163 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:15.883217 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-727256 minikube.k8s.io/updated_at=2025_01_20T14_07_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f660fd437a405b9b88cc818704e12bd22ce270c3 minikube.k8s.io/name=default-k8s-diff-port-727256 minikube.k8s.io/primary=true
	I0120 14:07:15.919057 1971324 ops.go:34] apiserver oom_adj: -16
	I0120 14:07:16.264800 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:16.765768 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:17.265700 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:17.765591 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:18.265120 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:18.765375 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.265828 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.765233 1971324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0120 14:07:19.871124 1971324 kubeadm.go:1113] duration metric: took 3.988031359s to wait for elevateKubeSystemPrivileges
	I0120 14:07:19.871168 1971324 kubeadm.go:394] duration metric: took 5m1.294931591s to StartCluster
	I0120 14:07:19.871195 1971324 settings.go:142] acquiring lock: {Name:mka51395421b81550a6a78e164d8aa21a9ede347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:07:19.871308 1971324 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 14:07:19.872935 1971324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20242-1920423/kubeconfig: {Name:mk37d73bdcdfa9f8496ce3e26889231440c7ce54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0120 14:07:19.873227 1971324 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.104 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0120 14:07:19.873360 1971324 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0120 14:07:19.873432 1971324 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873448 1971324 config.go:182] Loaded profile config "default-k8s-diff-port-727256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 14:07:19.873475 1971324 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873456 1971324 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873525 1971324 addons.go:247] addon storage-provisioner should already be in state true
	I0120 14:07:19.873515 1971324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-727256"
	I0120 14:07:19.873512 1971324 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873579 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873591 1971324 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873602 1971324 addons.go:247] addon dashboard should already be in state true
	I0120 14:07:19.873461 1971324 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-727256"
	I0120 14:07:19.873645 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873644 1971324 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.873658 1971324 addons.go:247] addon metrics-server should already be in state true
	I0120 14:07:19.873693 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.873994 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874028 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874069 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874104 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874122 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.874160 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874182 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.874249 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.875156 1971324 out.go:177] * Verifying Kubernetes components...
	I0120 14:07:19.877548 1971324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0120 14:07:19.894903 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41867
	I0120 14:07:19.895611 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I0120 14:07:19.895799 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0120 14:07:19.895810 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0120 14:07:19.896235 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896371 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896374 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896427 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.896946 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.896965 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897049 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897061 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897097 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897109 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897171 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.897179 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.897407 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897504 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897554 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.897763 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.897815 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.898170 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.898210 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.898503 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.898556 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.899598 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.899642 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.901013 1971324 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-727256"
	W0120 14:07:19.901024 1971324 addons.go:247] addon default-storageclass should already be in state true
	I0120 14:07:19.901047 1971324 host.go:66] Checking if "default-k8s-diff-port-727256" exists ...
	I0120 14:07:19.901256 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.901294 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.921489 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33493
	I0120 14:07:19.922200 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.922354 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I0120 14:07:19.922487 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0120 14:07:19.923012 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.923115 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.923351 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.923371 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.923750 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.923773 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.923903 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.924012 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.924035 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.924227 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.925245 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.925523 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.926174 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.926409 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.926777 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35375
	I0120 14:07:19.927338 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.927812 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.928588 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.928606 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.928749 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.928849 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.929144 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.929629 1971324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 14:07:19.929667 1971324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 14:07:19.930118 1971324 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0120 14:07:19.931197 1971324 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0120 14:07:19.931224 1971324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0120 14:07:19.933008 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0120 14:07:19.933033 1971324 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0120 14:07:19.933058 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.933259 1971324 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:07:19.933369 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0120 14:07:19.933389 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.933347 1971324 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0120 14:07:19.934800 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0120 14:07:19.934818 1971324 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0120 14:07:19.934847 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.937550 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.937957 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.937999 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.938124 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.938295 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.938406 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.938486 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.938817 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940255 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940625 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.940648 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.940917 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.940993 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.941018 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.941159 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.941305 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.941350 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.941478 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.941512 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.941902 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.942284 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:19.948962 1971324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34819
	I0120 14:07:19.949405 1971324 main.go:141] libmachine: () Calling .GetVersion
	I0120 14:07:19.949966 1971324 main.go:141] libmachine: Using API Version  1
	I0120 14:07:19.949989 1971324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 14:07:19.950388 1971324 main.go:141] libmachine: () Calling .GetMachineName
	I0120 14:07:19.950699 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetState
	I0120 14:07:19.952288 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .DriverName
	I0120 14:07:19.952507 1971324 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0120 14:07:19.952523 1971324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0120 14:07:19.952542 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHHostname
	I0120 14:07:19.956242 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.956713 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:90:f7", ip: ""} in network mk-default-k8s-diff-port-727256: {Iface:virbr4 ExpiryTime:2025-01-20 14:58:41 +0000 UTC Type:0 Mac:52:54:00:59:90:f7 Iaid: IPaddr:192.168.72.104 Prefix:24 Hostname:default-k8s-diff-port-727256 Clientid:01:52:54:00:59:90:f7}
	I0120 14:07:19.956743 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | domain default-k8s-diff-port-727256 has defined IP address 192.168.72.104 and MAC address 52:54:00:59:90:f7 in network mk-default-k8s-diff-port-727256
	I0120 14:07:19.956859 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHPort
	I0120 14:07:19.957008 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHKeyPath
	I0120 14:07:19.957169 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .GetSSHUsername
	I0120 14:07:19.957470 1971324 sshutil.go:53] new ssh client: &{IP:192.168.72.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/default-k8s-diff-port-727256/id_rsa Username:docker}
	I0120 14:07:20.127114 1971324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0120 14:07:20.154612 1971324 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-727256" to be "Ready" ...
	I0120 14:07:20.192263 1971324 node_ready.go:49] node "default-k8s-diff-port-727256" has status "Ready":"True"
	I0120 14:07:20.192290 1971324 node_ready.go:38] duration metric: took 37.635597ms for node "default-k8s-diff-port-727256" to be "Ready" ...
	I0120 14:07:20.192301 1971324 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:07:20.213859 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0120 14:07:20.213892 1971324 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0120 14:07:20.231942 1971324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:20.258778 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0120 14:07:20.282980 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0120 14:07:20.283031 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0120 14:07:20.283840 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0120 14:07:20.283868 1971324 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0120 14:07:20.313871 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0120 14:07:20.313902 1971324 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0120 14:07:20.343875 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0120 14:07:20.343906 1971324 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0120 14:07:20.366130 1971324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:07:20.366161 1971324 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0120 14:07:20.377530 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0120 14:07:20.391855 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0120 14:07:20.391890 1971324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0120 14:07:20.422771 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0120 14:07:20.490042 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0120 14:07:20.490070 1971324 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0120 14:07:20.668552 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.668581 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.668941 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.669010 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.669026 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.669028 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:20.669036 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.669363 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.669390 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.675996 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:20.676026 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:20.676331 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:20.676388 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:20.676354 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:20.680026 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0120 14:07:20.680052 1971324 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0120 14:07:20.807657 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0120 14:07:20.807698 1971324 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0120 14:07:20.876039 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0120 14:07:20.876068 1971324 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0120 14:07:20.999452 1971324 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:07:20.999483 1971324 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0120 14:07:21.023485 1971324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0120 14:07:21.643979 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266406433s)
	I0120 14:07:21.644056 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:21.644071 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:21.644447 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:21.644477 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:21.644506 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:21.644521 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:21.644831 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:21.644845 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.256978 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:22.324244 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.901426994s)
	I0120 14:07:22.324341 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:22.324361 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:22.324787 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:22.324849 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:22.324866 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.324875 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:22.324883 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:22.325248 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:22.325278 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:22.325285 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:22.325302 1971324 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-727256"
	I0120 14:07:23.339621 1971324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.316057578s)
	I0120 14:07:23.339712 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:23.339732 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:23.340118 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:23.340201 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:23.340216 1971324 main.go:141] libmachine: Making call to close driver server
	I0120 14:07:23.340225 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) Calling .Close
	I0120 14:07:23.340136 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:23.340517 1971324 main.go:141] libmachine: (default-k8s-diff-port-727256) DBG | Closing plugin on server side
	I0120 14:07:23.342106 1971324 main.go:141] libmachine: Successfully made call to close driver server
	I0120 14:07:23.342125 1971324 main.go:141] libmachine: Making call to close connection to plugin binary
	I0120 14:07:23.343861 1971324 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-727256 addons enable metrics-server
	
	I0120 14:07:23.345414 1971324 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0120 14:07:23.346269 1971324 addons.go:514] duration metric: took 3.472914176s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0120 14:07:24.739396 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:26.739597 1971324 pod_ready.go:103] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"False"
	I0120 14:07:27.738986 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.739017 1971324 pod_ready.go:82] duration metric: took 7.507037469s for pod "coredns-668d6bf9bc-l4rmh" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.739032 1971324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.745501 1971324 pod_ready.go:93] pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.745528 1971324 pod_ready.go:82] duration metric: took 6.487852ms for pod "coredns-668d6bf9bc-v22vm" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.745540 1971324 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.750780 1971324 pod_ready.go:93] pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.750815 1971324 pod_ready.go:82] duration metric: took 5.263354ms for pod "etcd-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.750829 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.757357 1971324 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.757387 1971324 pod_ready.go:82] duration metric: took 6.549516ms for pod "kube-apiserver-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.757400 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.763302 1971324 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:27.763332 1971324 pod_ready.go:82] duration metric: took 5.92298ms for pod "kube-controller-manager-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:27.763347 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6vtjs" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.139358 1971324 pod_ready.go:93] pod "kube-proxy-6vtjs" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:28.139385 1971324 pod_ready.go:82] duration metric: took 376.030461ms for pod "kube-proxy-6vtjs" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.139395 1971324 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.536558 1971324 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace has status "Ready":"True"
	I0120 14:07:28.536595 1971324 pod_ready.go:82] duration metric: took 397.192361ms for pod "kube-scheduler-default-k8s-diff-port-727256" in "kube-system" namespace to be "Ready" ...
	I0120 14:07:28.536609 1971324 pod_ready.go:39] duration metric: took 8.344296802s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0120 14:07:28.536633 1971324 api_server.go:52] waiting for apiserver process to appear ...
	I0120 14:07:28.536700 1971324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 14:07:28.573027 1971324 api_server.go:72] duration metric: took 8.699758175s to wait for apiserver process to appear ...
	I0120 14:07:28.573068 1971324 api_server.go:88] waiting for apiserver healthz status ...
	I0120 14:07:28.573101 1971324 api_server.go:253] Checking apiserver healthz at https://192.168.72.104:8444/healthz ...
	I0120 14:07:28.578383 1971324 api_server.go:279] https://192.168.72.104:8444/healthz returned 200:
	ok
	I0120 14:07:28.579376 1971324 api_server.go:141] control plane version: v1.32.0
	I0120 14:07:28.579402 1971324 api_server.go:131] duration metric: took 6.325441ms to wait for apiserver health ...
	I0120 14:07:28.579413 1971324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0120 14:07:28.743059 1971324 system_pods.go:59] 9 kube-system pods found
	I0120 14:07:28.743094 1971324 system_pods.go:61] "coredns-668d6bf9bc-l4rmh" [06f4698d-c393-4f30-b8de-77ade02b575e] Running
	I0120 14:07:28.743100 1971324 system_pods.go:61] "coredns-668d6bf9bc-v22vm" [95644362-4ab9-405f-b433-5b384ab083d1] Running
	I0120 14:07:28.743104 1971324 system_pods.go:61] "etcd-default-k8s-diff-port-727256" [888345c9-ff71-44eb-9501-6a878f6e7fce] Running
	I0120 14:07:28.743108 1971324 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-727256" [2c11d7e2-9f34-4861-977b-7559572c5eb9] Running
	I0120 14:07:28.743111 1971324 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-727256" [f6202668-dca8-46a8-9ac2-d58b96bda552] Running
	I0120 14:07:28.743115 1971324 system_pods.go:61] "kube-proxy-6vtjs" [d57cfd3b-d6bd-4e61-a606-b2451a3768ca] Running
	I0120 14:07:28.743118 1971324 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-727256" [764e1f75-6402-4ce2-9d44-5d8af5dbb0e8] Running
	I0120 14:07:28.743124 1971324 system_pods.go:61] "metrics-server-f79f97bbb-kp5hl" [190513f9-3e9f-4705-ae23-9481987802f1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0120 14:07:28.743129 1971324 system_pods.go:61] "storage-provisioner" [0f716b6a-f5d2-49a0-a810-e0cdf72a3020] Running
	I0120 14:07:28.743136 1971324 system_pods.go:74] duration metric: took 163.71699ms to wait for pod list to return data ...
	I0120 14:07:28.743145 1971324 default_sa.go:34] waiting for default service account to be created ...
	I0120 14:07:28.937247 1971324 default_sa.go:45] found service account: "default"
	I0120 14:07:28.937280 1971324 default_sa.go:55] duration metric: took 194.12949ms for default service account to be created ...
	I0120 14:07:28.937291 1971324 system_pods.go:137] waiting for k8s-apps to be running ...
	I0120 14:07:29.391088 1971324 system_pods.go:87] 9 kube-system pods found
	I0120 14:07:57.893064 1971155 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 14:07:57.893206 1971155 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 14:07:57.895047 1971155 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 14:07:57.895110 1971155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:57.895204 1971155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:57.895358 1971155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:57.895455 1971155 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 14:07:57.895510 1971155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:57.897667 1971155 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:57.897769 1971155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:57.897859 1971155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:57.897979 1971155 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:57.898089 1971155 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:57.898184 1971155 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:57.898261 1971155 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:57.898370 1971155 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:57.898473 1971155 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:57.898549 1971155 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:57.898650 1971155 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:57.898706 1971155 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:57.898808 1971155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:57.898866 1971155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:57.898917 1971155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:57.898971 1971155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:57.899018 1971155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:57.899156 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:57.899270 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:57.899322 1971155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:57.899385 1971155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:57.900907 1971155 out.go:235]   - Booting up control plane ...
	I0120 14:07:57.901012 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:57.901098 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:57.901183 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:57.901301 1971155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:57.901498 1971155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 14:07:57.901549 1971155 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 14:07:57.901614 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.901802 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.901862 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902008 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902071 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902248 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902332 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902476 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902532 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:07:57.902723 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:07:57.902740 1971155 kubeadm.go:310] 
	I0120 14:07:57.902798 1971155 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 14:07:57.902913 1971155 kubeadm.go:310] 		timed out waiting for the condition
	I0120 14:07:57.902942 1971155 kubeadm.go:310] 
	I0120 14:07:57.902990 1971155 kubeadm.go:310] 	This error is likely caused by:
	I0120 14:07:57.903050 1971155 kubeadm.go:310] 		- The kubelet is not running
	I0120 14:07:57.903175 1971155 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 14:07:57.903185 1971155 kubeadm.go:310] 
	I0120 14:07:57.903309 1971155 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 14:07:57.903358 1971155 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 14:07:57.903406 1971155 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 14:07:57.903415 1971155 kubeadm.go:310] 
	I0120 14:07:57.903535 1971155 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 14:07:57.903608 1971155 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 14:07:57.903614 1971155 kubeadm.go:310] 
	I0120 14:07:57.903742 1971155 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 14:07:57.903828 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 14:07:57.903894 1971155 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 14:07:57.903959 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 14:07:57.903970 1971155 kubeadm.go:310] 
	W0120 14:07:57.904147 1971155 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0120 14:07:57.904205 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0120 14:07:58.379343 1971155 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 14:07:58.394094 1971155 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0120 14:07:58.405184 1971155 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0120 14:07:58.405214 1971155 kubeadm.go:157] found existing configuration files:
	
	I0120 14:07:58.405275 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0120 14:07:58.415126 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0120 14:07:58.415190 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0120 14:07:58.425525 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0120 14:07:58.435286 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0120 14:07:58.435402 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0120 14:07:58.445346 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0120 14:07:58.455338 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0120 14:07:58.455400 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0120 14:07:58.465346 1971155 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0120 14:07:58.474739 1971155 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0120 14:07:58.474821 1971155 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0120 14:07:58.484664 1971155 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0120 14:07:58.559434 1971155 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0120 14:07:58.559546 1971155 kubeadm.go:310] [preflight] Running pre-flight checks
	I0120 14:07:58.713832 1971155 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0120 14:07:58.713978 1971155 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0120 14:07:58.714110 1971155 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0120 14:07:58.902142 1971155 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0120 14:07:58.904151 1971155 out.go:235]   - Generating certificates and keys ...
	I0120 14:07:58.904252 1971155 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0120 14:07:58.904340 1971155 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0120 14:07:58.904451 1971155 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0120 14:07:58.904532 1971155 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0120 14:07:58.904662 1971155 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0120 14:07:58.904752 1971155 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0120 14:07:58.904850 1971155 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0120 14:07:58.904938 1971155 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0120 14:07:58.905078 1971155 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0120 14:07:58.905203 1971155 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0120 14:07:58.905255 1971155 kubeadm.go:310] [certs] Using the existing "sa" key
	I0120 14:07:58.905311 1971155 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0120 14:07:59.059284 1971155 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0120 14:07:59.367307 1971155 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0120 14:07:59.478773 1971155 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0120 14:07:59.769599 1971155 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0120 14:07:59.795017 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0120 14:07:59.796387 1971155 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0120 14:07:59.796440 1971155 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0120 14:07:59.967182 1971155 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0120 14:07:59.969049 1971155 out.go:235]   - Booting up control plane ...
	I0120 14:07:59.969210 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0120 14:07:59.969498 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0120 14:07:59.978995 1971155 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0120 14:07:59.980298 1971155 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0120 14:07:59.983629 1971155 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0120 14:08:39.986873 1971155 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0120 14:08:39.986972 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:39.987222 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:08:44.987592 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:44.987868 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:08:54.988530 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:08:54.988725 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:14.990244 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:09:14.990492 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:54.990993 1971155 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0120 14:09:54.991340 1971155 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0120 14:09:54.991370 1971155 kubeadm.go:310] 
	I0120 14:09:54.991419 1971155 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0120 14:09:54.991474 1971155 kubeadm.go:310] 		timed out waiting for the condition
	I0120 14:09:54.991485 1971155 kubeadm.go:310] 
	I0120 14:09:54.991536 1971155 kubeadm.go:310] 	This error is likely caused by:
	I0120 14:09:54.991582 1971155 kubeadm.go:310] 		- The kubelet is not running
	I0120 14:09:54.991734 1971155 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0120 14:09:54.991760 1971155 kubeadm.go:310] 
	I0120 14:09:54.991930 1971155 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0120 14:09:54.991981 1971155 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0120 14:09:54.992034 1971155 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0120 14:09:54.992065 1971155 kubeadm.go:310] 
	I0120 14:09:54.992234 1971155 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0120 14:09:54.992326 1971155 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0120 14:09:54.992342 1971155 kubeadm.go:310] 
	I0120 14:09:54.992508 1971155 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0120 14:09:54.992650 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0120 14:09:54.992786 1971155 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0120 14:09:54.992894 1971155 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0120 14:09:54.992904 1971155 kubeadm.go:310] 
	I0120 14:09:54.994025 1971155 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0120 14:09:54.994123 1971155 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0120 14:09:54.994214 1971155 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0120 14:09:54.994325 1971155 kubeadm.go:394] duration metric: took 7m58.806679255s to StartCluster
	I0120 14:09:54.994398 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0120 14:09:54.994475 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0120 14:09:55.044299 1971155 cri.go:89] found id: ""
	I0120 14:09:55.044338 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.044350 1971155 logs.go:284] No container was found matching "kube-apiserver"
	I0120 14:09:55.044359 1971155 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0120 14:09:55.044434 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0120 14:09:55.088726 1971155 cri.go:89] found id: ""
	I0120 14:09:55.088759 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.088767 1971155 logs.go:284] No container was found matching "etcd"
	I0120 14:09:55.088774 1971155 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0120 14:09:55.088848 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0120 14:09:55.127484 1971155 cri.go:89] found id: ""
	I0120 14:09:55.127513 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.127523 1971155 logs.go:284] No container was found matching "coredns"
	I0120 14:09:55.127531 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0120 14:09:55.127602 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0120 14:09:55.167042 1971155 cri.go:89] found id: ""
	I0120 14:09:55.167079 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.167091 1971155 logs.go:284] No container was found matching "kube-scheduler"
	I0120 14:09:55.167100 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0120 14:09:55.167173 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0120 14:09:55.206075 1971155 cri.go:89] found id: ""
	I0120 14:09:55.206111 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.206122 1971155 logs.go:284] No container was found matching "kube-proxy"
	I0120 14:09:55.206128 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0120 14:09:55.206184 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0120 14:09:55.262849 1971155 cri.go:89] found id: ""
	I0120 14:09:55.262895 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.262907 1971155 logs.go:284] No container was found matching "kube-controller-manager"
	I0120 14:09:55.262917 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0120 14:09:55.262989 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0120 14:09:55.303064 1971155 cri.go:89] found id: ""
	I0120 14:09:55.303102 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.303114 1971155 logs.go:284] No container was found matching "kindnet"
	I0120 14:09:55.303122 1971155 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0120 14:09:55.303190 1971155 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0120 14:09:55.339202 1971155 cri.go:89] found id: ""
	I0120 14:09:55.339237 1971155 logs.go:282] 0 containers: []
	W0120 14:09:55.339248 1971155 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0120 14:09:55.339262 1971155 logs.go:123] Gathering logs for describe nodes ...
	I0120 14:09:55.339279 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0120 14:09:55.425991 1971155 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0120 14:09:55.426022 1971155 logs.go:123] Gathering logs for CRI-O ...
	I0120 14:09:55.426042 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0120 14:09:55.529413 1971155 logs.go:123] Gathering logs for container status ...
	I0120 14:09:55.529454 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0120 14:09:55.574927 1971155 logs.go:123] Gathering logs for kubelet ...
	I0120 14:09:55.574965 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0120 14:09:55.631464 1971155 logs.go:123] Gathering logs for dmesg ...
	I0120 14:09:55.631507 1971155 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0120 14:09:55.647055 1971155 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0120 14:09:55.647121 1971155 out.go:270] * 
	W0120 14:09:55.647197 1971155 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 14:09:55.647230 1971155 out.go:270] * 
	W0120 14:09:55.648431 1971155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0120 14:09:55.652120 1971155 out.go:201] 
	W0120 14:09:55.653811 1971155 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0120 14:09:55.653880 1971155 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0120 14:09:55.653909 1971155 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0120 14:09:55.655598 1971155 out.go:201] 
	
	
	==> CRI-O <==
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.563229191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383121563187105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08046a07-2da9-4676-8038-d8eb79017ea5 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.563958492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b67eb5de-d0b3-4508-bd90-055182a04ba3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.564025109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b67eb5de-d0b3-4508-bd90-055182a04ba3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.564079555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b67eb5de-d0b3-4508-bd90-055182a04ba3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.600130960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4352898e-ae68-41fd-8bed-75d752a1fb91 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.600204808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4352898e-ae68-41fd-8bed-75d752a1fb91 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.601337180Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=628591d9-c32d-4101-9b53-3f55b144dc13 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.601799807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383121601767431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=628591d9-c32d-4101-9b53-3f55b144dc13 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.602417673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e179341-27ff-4650-826d-ecfa5ea09db4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.602473271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e179341-27ff-4650-826d-ecfa5ea09db4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.602510416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2e179341-27ff-4650-826d-ecfa5ea09db4 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.637056410Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b41f81df-8b91-41ad-96c8-638793ff8c15 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.637162131Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b41f81df-8b91-41ad-96c8-638793ff8c15 name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.638758215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdd04762-1c98-45fd-a0d6-d0e38091289e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.639140182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383121639112165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdd04762-1c98-45fd-a0d6-d0e38091289e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.639837145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da61ce69-77c6-4ca9-a290-de53c5295795 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.639885961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da61ce69-77c6-4ca9-a290-de53c5295795 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.639918717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=da61ce69-77c6-4ca9-a290-de53c5295795 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.677072986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=220b53c6-580e-4832-befd-0d725b06449b name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.677148009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=220b53c6-580e-4832-befd-0d725b06449b name=/runtime.v1.RuntimeService/Version
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.678801267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0cfdc93a-fef9-4931-9375-71db619d636d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.679170811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737383121679140807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0cfdc93a-fef9-4931-9375-71db619d636d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.679931352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7896dff1-05ea-4ebf-bcd6-2a4ff0e43bc7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.679988449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7896dff1-05ea-4ebf-bcd6-2a4ff0e43bc7 name=/runtime.v1.RuntimeService/ListContainers
	Jan 20 14:25:21 old-k8s-version-191446 crio[632]: time="2025-01-20 14:25:21.680021435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7896dff1-05ea-4ebf-bcd6-2a4ff0e43bc7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan20 14:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.057095] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044027] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.065658] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.960481] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.662559] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.919769] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.062908] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080713] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.237108] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.143710] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.284512] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.705620] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.060994] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.033318] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[Jan20 14:02] kauditd_printk_skb: 46 callbacks suppressed
	[Jan20 14:06] systemd-fstab-generator[5027]: Ignoring "noauto" option for root device
	[Jan20 14:07] systemd-fstab-generator[5311]: Ignoring "noauto" option for root device
	[  +0.070549] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:25:21 up 23 min,  0 users,  load average: 0.20, 0.08, 0.06
	Linux old-k8s-version-191446 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc00093dcb0)
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]: goroutine 156 [select]:
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cbdef0, 0x4f0ac20, 0xc000175f90, 0x1, 0xc00009e0c0)
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c822a0, 0xc00009e0c0)
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009352e0, 0xc000cb2040)
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jan 20 14:25:16 old-k8s-version-191446 kubelet[7174]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jan 20 14:25:16 old-k8s-version-191446 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 20 14:25:16 old-k8s-version-191446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 20 14:25:17 old-k8s-version-191446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 180.
	Jan 20 14:25:17 old-k8s-version-191446 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 20 14:25:17 old-k8s-version-191446 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 20 14:25:17 old-k8s-version-191446 kubelet[7183]: I0120 14:25:17.724849    7183 server.go:416] Version: v1.20.0
	Jan 20 14:25:17 old-k8s-version-191446 kubelet[7183]: I0120 14:25:17.725101    7183 server.go:837] Client rotation is on, will bootstrap in background
	Jan 20 14:25:17 old-k8s-version-191446 kubelet[7183]: I0120 14:25:17.727196    7183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 20 14:25:17 old-k8s-version-191446 kubelet[7183]: I0120 14:25:17.728324    7183 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jan 20 14:25:17 old-k8s-version-191446 kubelet[7183]: W0120 14:25:17.728706    7183 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 2 (254.206654ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-191446" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (381.55s)
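Note: the kubeadm output quoted above shows the kubelet never becoming healthy, and minikube's own suggestion is to retry with the systemd cgroup driver. A minimal remediation sketch, assembled only from commands, flags, and the profile name that already appear in this log (old-k8s-version-191446, kvm2, crio); these commands were not executed as part of the recorded run:
    minikube start -p old-k8s-version-191446 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
    minikube ssh -p old-k8s-version-191446 "sudo systemctl status kubelet"
    minikube ssh -p old-k8s-version-191446 "sudo journalctl -xeu kubelet"
    minikube ssh -p old-k8s-version-191446 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"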

                                                
                                    

Test pass (259/311)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.46
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.0/json-events 4.09
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.07
18 TestDownloadOnly/v1.32.0/DeleteAll 0.16
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.63
22 TestOffline 62.84
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 132.42
31 TestAddons/serial/GCPAuth/Namespaces 1.3
32 TestAddons/serial/GCPAuth/FakeCredentials 7.53
35 TestAddons/parallel/Registry 19.62
37 TestAddons/parallel/InspektorGadget 11.33
38 TestAddons/parallel/MetricsServer 6.58
40 TestAddons/parallel/CSI 49.19
41 TestAddons/parallel/Headlamp 21.07
42 TestAddons/parallel/CloudSpanner 5.95
43 TestAddons/parallel/LocalPath 10.24
44 TestAddons/parallel/NvidiaDevicePlugin 5.56
45 TestAddons/parallel/Yakd 11.47
47 TestAddons/StoppedEnableDisable 91.12
48 TestCertOptions 47.02
49 TestCertExpiration 264.23
51 TestForceSystemdFlag 61.93
52 TestForceSystemdEnv 86.16
54 TestKVMDriverInstallOrUpdate 3.68
58 TestErrorSpam/setup 42.35
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.78
61 TestErrorSpam/pause 1.69
62 TestErrorSpam/unpause 1.76
63 TestErrorSpam/stop 5.29
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.63
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 54.24
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.68
75 TestFunctional/serial/CacheCmd/cache/add_local 1.45
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 33.27
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.57
86 TestFunctional/serial/LogsFileCmd 1.54
87 TestFunctional/serial/InvalidService 4.22
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 19.51
91 TestFunctional/parallel/DryRun 0.3
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.25
97 TestFunctional/parallel/ServiceCmdConnect 11.57
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 44.16
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.51
103 TestFunctional/parallel/MySQL 24.74
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.64
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
113 TestFunctional/parallel/License 0.24
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.93
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.42
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.95
121 TestFunctional/parallel/ImageCommands/Setup 0.99
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
132 TestFunctional/parallel/ProfileCmd/profile_list 0.4
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.8
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
135 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.57
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.01
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
145 TestFunctional/parallel/ServiceCmd/List 0.53
146 TestFunctional/parallel/MountCmd/any-port 19.79
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
149 TestFunctional/parallel/ServiceCmd/Format 0.41
150 TestFunctional/parallel/ServiceCmd/URL 0.33
151 TestFunctional/parallel/MountCmd/specific-port 2.14
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 200.46
160 TestMultiControlPlane/serial/DeployApp 6.18
161 TestMultiControlPlane/serial/PingHostFromPods 1.29
162 TestMultiControlPlane/serial/AddWorkerNode 59
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
165 TestMultiControlPlane/serial/CopyFile 13.94
166 TestMultiControlPlane/serial/StopSecondaryNode 91.74
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
168 TestMultiControlPlane/serial/RestartSecondaryNode 52.2
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 445.83
171 TestMultiControlPlane/serial/DeleteSecondaryNode 18.34
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
173 TestMultiControlPlane/serial/StopCluster 272.4
174 TestMultiControlPlane/serial/RestartCluster 145.14
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
176 TestMultiControlPlane/serial/AddSecondaryNode 81.78
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.92
181 TestJSONOutput/start/Command 59.47
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.76
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.67
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.37
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 92.83
213 TestMountStart/serial/StartWithMountFirst 29.47
214 TestMountStart/serial/VerifyMountFirst 0.4
215 TestMountStart/serial/StartWithMountSecond 26.98
216 TestMountStart/serial/VerifyMountSecond 0.39
217 TestMountStart/serial/DeleteFirst 0.7
218 TestMountStart/serial/VerifyMountPostDelete 0.4
219 TestMountStart/serial/Stop 1.29
220 TestMountStart/serial/RestartStopped 23.41
221 TestMountStart/serial/VerifyMountPostStop 0.4
224 TestMultiNode/serial/FreshStart2Nodes 114.3
225 TestMultiNode/serial/DeployApp2Nodes 4.31
226 TestMultiNode/serial/PingHostFrom2Pods 0.85
227 TestMultiNode/serial/AddNode 52.85
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.61
230 TestMultiNode/serial/CopyFile 7.65
231 TestMultiNode/serial/StopNode 3.2
232 TestMultiNode/serial/StartAfterStop 40.08
233 TestMultiNode/serial/RestartKeepsNodes 344.1
234 TestMultiNode/serial/DeleteNode 2.86
235 TestMultiNode/serial/StopMultiNode 182.16
236 TestMultiNode/serial/RestartMultiNode 113.87
237 TestMultiNode/serial/ValidateNameConflict 46.66
244 TestScheduledStopUnix 116.02
248 TestRunningBinaryUpgrade 214.9
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
262 TestPause/serial/Start 108.23
263 TestNoKubernetes/serial/StartWithK8s 96.55
264 TestNoKubernetes/serial/StartWithStopK8s 44.03
265 TestPause/serial/SecondStartNoReconfiguration 62.88
266 TestNoKubernetes/serial/Start 32.71
274 TestNetworkPlugins/group/false 3.95
278 TestPause/serial/Pause 0.83
279 TestPause/serial/VerifyStatus 0.29
280 TestPause/serial/Unpause 0.75
281 TestPause/serial/PauseAgain 1.04
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
284 TestPause/serial/DeletePaused 1.09
285 TestPause/serial/VerifyDeletedResources 14.69
286 TestStoppedBinaryUpgrade/Setup 0.57
287 TestStoppedBinaryUpgrade/Upgrade 96.57
290 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
292 TestStartStop/group/no-preload/serial/FirstStart 99.39
294 TestStartStop/group/embed-certs/serial/FirstStart 75.6
295 TestStartStop/group/no-preload/serial/DeployApp 10.32
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
297 TestStartStop/group/no-preload/serial/Stop 91.22
298 TestStartStop/group/embed-certs/serial/DeployApp 9.28
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.18
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
302 TestStartStop/group/embed-certs/serial/Stop 91.57
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.1
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/old-k8s-version/serial/Stop 3.31
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
320 TestStartStop/group/newest-cni/serial/FirstStart 50.08
321 TestNetworkPlugins/group/auto/Start 86.32
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.35
324 TestStartStop/group/newest-cni/serial/Stop 7.34
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
326 TestStartStop/group/newest-cni/serial/SecondStart 38.31
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
330 TestStartStop/group/newest-cni/serial/Pause 2.66
331 TestNetworkPlugins/group/kindnet/Start 64.54
332 TestNetworkPlugins/group/calico/Start 104.05
333 TestNetworkPlugins/group/auto/KubeletFlags 0.23
334 TestNetworkPlugins/group/auto/NetCatPod 11.28
335 TestNetworkPlugins/group/auto/DNS 0.17
336 TestNetworkPlugins/group/auto/Localhost 0.12
337 TestNetworkPlugins/group/auto/HairPin 0.14
338 TestNetworkPlugins/group/custom-flannel/Start 88.58
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
341 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
342 TestNetworkPlugins/group/kindnet/DNS 0.25
343 TestNetworkPlugins/group/kindnet/Localhost 0.19
344 TestNetworkPlugins/group/kindnet/HairPin 0.19
345 TestNetworkPlugins/group/enable-default-cni/Start 86.75
346 TestNetworkPlugins/group/flannel/Start 76.55
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.29
349 TestNetworkPlugins/group/calico/NetCatPod 14.36
350 TestNetworkPlugins/group/calico/DNS 0.19
351 TestNetworkPlugins/group/calico/Localhost 0.14
352 TestNetworkPlugins/group/calico/HairPin 0.13
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
354 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
355 TestNetworkPlugins/group/custom-flannel/DNS 0.22
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
358 TestNetworkPlugins/group/bridge/Start 59.86
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
361 TestNetworkPlugins/group/flannel/ControllerPod 6.01
362 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
363 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
366 TestNetworkPlugins/group/flannel/NetCatPod 11.59
367 TestNetworkPlugins/group/flannel/DNS 0.2
368 TestNetworkPlugins/group/flannel/Localhost 0.15
369 TestNetworkPlugins/group/flannel/HairPin 0.14
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
371 TestNetworkPlugins/group/bridge/NetCatPod 10.25
372 TestNetworkPlugins/group/bridge/DNS 0.15
373 TestNetworkPlugins/group/bridge/Localhost 0.12
374 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (7.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-567505 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-567505 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.463917973s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.46s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 12:50:16.487937 1927672 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0120 12:50:16.488043 1927672 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
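Note: a minimal sketch of how the cached preload noted above could be inspected by hand, assuming the jenkins paths recorded in this log (illustrative only; these commands were not part of the run):
    ls -lh /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    md5sum /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    # The preload download URL recorded later in this report carries checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 for comparison.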

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-567505
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-567505: exit status 85 (70.595285ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-567505 | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC |          |
	|         | -p download-only-567505        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:50:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:50:09.071044 1927684 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:50:09.071201 1927684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:50:09.071212 1927684 out.go:358] Setting ErrFile to fd 2...
	I0120 12:50:09.071216 1927684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:50:09.071400 1927684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	W0120 12:50:09.071541 1927684 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20242-1920423/.minikube/config/config.json: open /home/jenkins/minikube-integration/20242-1920423/.minikube/config/config.json: no such file or directory
	I0120 12:50:09.072220 1927684 out.go:352] Setting JSON to true
	I0120 12:50:09.073543 1927684 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16355,"bootTime":1737361054,"procs":406,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:50:09.073690 1927684 start.go:139] virtualization: kvm guest
	I0120 12:50:09.076160 1927684 out.go:97] [download-only-567505] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0120 12:50:09.076304 1927684 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 12:50:09.076342 1927684 notify.go:220] Checking for updates...
	I0120 12:50:09.077780 1927684 out.go:169] MINIKUBE_LOCATION=20242
	I0120 12:50:09.079284 1927684 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:50:09.080798 1927684 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 12:50:09.082443 1927684 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 12:50:09.083988 1927684 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 12:50:09.086873 1927684 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 12:50:09.087148 1927684 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 12:50:09.122303 1927684 out.go:97] Using the kvm2 driver based on user configuration
	I0120 12:50:09.122346 1927684 start.go:297] selected driver: kvm2
	I0120 12:50:09.122356 1927684 start.go:901] validating driver "kvm2" against <nil>
	I0120 12:50:09.122829 1927684 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:50:09.122933 1927684 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20242-1920423/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0120 12:50:09.140135 1927684 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0120 12:50:09.140192 1927684 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 12:50:09.140775 1927684 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0120 12:50:09.140958 1927684 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 12:50:09.140996 1927684 cni.go:84] Creating CNI manager for ""
	I0120 12:50:09.141066 1927684 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0120 12:50:09.141083 1927684 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0120 12:50:09.141177 1927684 start.go:340] cluster config:
	{Name:download-only-567505 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-567505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 12:50:09.141453 1927684 iso.go:125] acquiring lock: {Name:mk18c81b0efa5cc8efe9d47dc52684752f206ae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0120 12:50:09.143575 1927684 out.go:97] Downloading VM boot image ...
	I0120 12:50:09.143638 1927684 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0120 12:50:12.042253 1927684 out.go:97] Starting "download-only-567505" primary control-plane node in "download-only-567505" cluster
	I0120 12:50:12.042298 1927684 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:50:12.070646 1927684 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0120 12:50:12.070703 1927684 cache.go:56] Caching tarball of preloaded images
	I0120 12:50:12.070934 1927684 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0120 12:50:12.072995 1927684 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 12:50:12.073033 1927684 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0120 12:50:12.104107 1927684 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-567505 host does not exist
	  To start a cluster, run: "minikube start -p download-only-567505"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
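The download-only run above pulls both the boot ISO and the preload tarball through URLs that carry a ?checksum= query, so the cached file is verified before it is used. The snippet below is only a rough stdlib sketch of that pattern, not minikube's own download code: the destination path is hypothetical, while the URL and the md5 value are copied from the preload line in the log.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dst and fails if the md5 of the bytes
// written does not match wantMD5 (hex-encoded).
func downloadWithMD5(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Stream to the file and the hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and md5 taken from the preload download line above; dst is made up.
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"/tmp/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"f93b07cde9c3289306cbaeb7a1803c19",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}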

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-567505
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.0/json-events (4.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-454309 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-454309 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.088136338s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (4.09s)

                                                
                                    
TestDownloadOnly/v1.32.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 12:50:20.943956 1927672 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
I0120 12:50:20.944008 1927672 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20242-1920423/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-454309
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-454309: exit status 85 (66.945612ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-567505 | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC |                     |
	|         | -p download-only-567505        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC | 20 Jan 25 12:50 UTC |
	| delete  | -p download-only-567505        | download-only-567505 | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC | 20 Jan 25 12:50 UTC |
	| start   | -o=json --download-only        | download-only-454309 | jenkins | v1.35.0 | 20 Jan 25 12:50 UTC |                     |
	|         | -p download-only-454309        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 12:50:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 12:50:16.901489 1927871 out.go:345] Setting OutFile to fd 1 ...
	I0120 12:50:16.901617 1927871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:50:16.901627 1927871 out.go:358] Setting ErrFile to fd 2...
	I0120 12:50:16.901631 1927871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 12:50:16.901794 1927871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 12:50:16.902405 1927871 out.go:352] Setting JSON to true
	I0120 12:50:16.903583 1927871 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":16363,"bootTime":1737361054,"procs":404,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 12:50:16.903707 1927871 start.go:139] virtualization: kvm guest
	I0120 12:50:16.906156 1927871 out.go:97] [download-only-454309] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 12:50:16.906378 1927871 notify.go:220] Checking for updates...
	I0120 12:50:16.908071 1927871 out.go:169] MINIKUBE_LOCATION=20242
	I0120 12:50:16.909598 1927871 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 12:50:16.911150 1927871 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 12:50:16.912570 1927871 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 12:50:16.914177 1927871 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-454309 host does not exist
	  To start a cluster, run: "minikube start -p download-only-454309"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-454309
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I0120 12:50:21.602494 1927672 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-946086 --alsologtostderr --binary-mirror http://127.0.0.1:35627 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-946086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-946086
--- PASS: TestBinaryMirror (0.63s)
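TestBinaryMirror points --binary-mirror at a local HTTP endpoint instead of dl.k8s.io, and the kubectl URL logged above suggests the mirror only has to expose the same /release/<version>/bin/<os>/<arch>/... layout. A minimal throwaway mirror, assuming that layout and an illustrative directory and port, could be as simple as a static file server:

package main

import (
	"log"
	"net/http"
)

// A throwaway local "binary mirror": a static file server whose directory is
// laid out like dl.k8s.io, so that e.g.
//   http://127.0.0.1:35627/release/v1.32.0/bin/linux/amd64/kubectl
// resolves to a file on disk. Directory and port here are illustrative only.
func main() {
	const root = "/srv/k8s-mirror"
	log.Println("serving", root, "on 127.0.0.1:35627")
	log.Fatal(http.ListenAndServe("127.0.0.1:35627", http.FileServer(http.Dir(root))))
}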

                                                
                                    
TestOffline (62.84s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-926956 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-926956 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.77281112s)
helpers_test.go:175: Cleaning up "offline-crio-926956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-926956
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-926956: (1.064964328s)
--- PASS: TestOffline (62.84s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-917221
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-917221: exit status 85 (59.176219ms)

                                                
                                                
-- stdout --
	* Profile "addons-917221" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-917221"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-917221
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-917221: exit status 85 (59.787164ms)

                                                
                                                
-- stdout --
	* Profile "addons-917221" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-917221"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (132.42s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-917221 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-917221 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m12.414876991s)
--- PASS: TestAddons/Setup (132.42s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (1.3s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-917221 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-917221 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-917221 get secret gcp-auth -n new-namespace: exit status 1 (79.43619ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-917221 logs -l app=gcp-auth -n gcp-auth
I0120 12:52:35.305581 1927672 retry.go:31] will retry after 1.021318598s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/01/20 12:52:34 GCP Auth Webhook started!
	2025/01/20 12:52:35 Ready to marshal response ...
	2025/01/20 12:52:35 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-917221 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.30s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-917221 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-917221 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d68c02ea-4ab7-49a6-90c8-8ad183045335] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d68c02ea-4ab7-49a6-90c8-8ad183045335] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.004401285s
addons_test.go:633: (dbg) Run:  kubectl --context addons-917221 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-917221 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-917221 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.53s)

                                                
                                    
TestAddons/parallel/Registry (19.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.446722ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-ndwgb" [bca66afd-2437-496e-b76a-e829fe9f5952] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004728471s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sb2tk" [7dbae7cd-f142-43d6-8e97-d98b3ad5e51a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003965861s
addons_test.go:331: (dbg) Run:  kubectl --context addons-917221 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-917221 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-917221 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.744411979s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 ip
2025/01/20 12:53:11 [DEBUG] GET http://192.168.39.225:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.62s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.33s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cmqc7" [6088ec5b-e21b-4324-8ae3-bf821273baea] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0243539s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-917221 addons disable inspektor-gadget --alsologtostderr -v=1: (6.303227946s)
--- PASS: TestAddons/parallel/InspektorGadget (11.33s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.018888ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0120 12:52:52.596737 1927672 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 12:52:52.596766 1927672 kapi.go:107] duration metric: took 8.804835ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-7fbb699795-m2dgw" [78861784-257c-4bc8-88cc-1751f08124fa] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004646994s
addons_test.go:402: (dbg) Run:  kubectl --context addons-917221 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-917221 addons disable metrics-server --alsologtostderr -v=1: (1.504648456s)
--- PASS: TestAddons/parallel/MetricsServer (6.58s)

                                                
                                    
TestAddons/parallel/CSI (49.19s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.815515ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-917221 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-917221 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1c6b339c-06ef-49e1-a3e0-3b09a0145cdc] Pending
helpers_test.go:344: "task-pv-pod" [1c6b339c-06ef-49e1-a3e0-3b09a0145cdc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1c6b339c-06ef-49e1-a3e0-3b09a0145cdc] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.005683221s
addons_test.go:511: (dbg) Run:  kubectl --context addons-917221 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-917221 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-917221 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-917221 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-917221 delete pod task-pv-pod: (1.251891951s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-917221 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-917221 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-917221 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5dccb816-ce80-4f01-b9a6-fa800a59ec91] Pending
helpers_test.go:344: "task-pv-pod-restore" [5dccb816-ce80-4f01-b9a6-fa800a59ec91] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5dccb816-ce80-4f01-b9a6-fa800a59ec91] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.008922771s
addons_test.go:553: (dbg) Run:  kubectl --context addons-917221 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-917221 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-917221 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-917221 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.001801926s)
--- PASS: TestAddons/parallel/CSI (49.19s)

                                                
                                    
TestAddons/parallel/Headlamp (21.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-917221 --alsologtostderr -v=1
I0120 12:52:52.587976 1927672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-917221 --alsologtostderr -v=1: (1.030142665s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-p97sc" [8a87b9f3-2c62-4d68-a373-891aca400e2f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-p97sc" [8a87b9f3-2c62-4d68-a373-891aca400e2f] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004311827s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-917221 addons disable headlamp --alsologtostderr -v=1: (6.031485808s)
--- PASS: TestAddons/parallel/Headlamp (21.07s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.95s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-xs25x" [b86869c3-4327-49f7-adcb-8a3f69956acc] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003794065s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.95s)

                                                
                                    
TestAddons/parallel/LocalPath (10.24s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-917221 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-917221 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917221 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2c351c42-78b9-4c26-861e-18682510fa74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2c351c42-78b9-4c26-861e-18682510fa74] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2c351c42-78b9-4c26-861e-18682510fa74] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004265212s
addons_test.go:906: (dbg) Run:  kubectl --context addons-917221 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 ssh "cat /opt/local-path-provisioner/pvc-36fb8dda-e079-4084-a36d-f8edd1c96d8a_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-917221 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-917221 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.24s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pf9mc" [a5b953b2-5067-4e24-9998-c91fb25aeaf0] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005237275s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
TestAddons/parallel/Yakd (11.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-nz6rn" [30faa17c-8d78-4def-851a-18c108a1cc90] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004573182s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-917221 addons disable yakd --alsologtostderr -v=1: (6.462906459s)
--- PASS: TestAddons/parallel/Yakd (11.47s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.12s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-917221
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-917221: (1m30.799154883s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-917221
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-917221
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-917221
--- PASS: TestAddons/StoppedEnableDisable (91.12s)

                                                
                                    
TestCertOptions (47.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-833776 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-833776 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.693847991s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-833776 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-833776 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-833776 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-833776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-833776
--- PASS: TestCertOptions (47.02s)

                                                
                                    
TestCertExpiration (264.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-038404 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-038404 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (58.006349068s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-038404 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-038404 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (25.394335122s)
helpers_test.go:175: Cleaning up "cert-expiration-038404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-038404
--- PASS: TestCertExpiration (264.23s)

                                                
                                    
TestForceSystemdFlag (61.93s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-821407 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-821407 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.690392174s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-821407 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-821407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-821407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-821407: (1.030412775s)
--- PASS: TestForceSystemdFlag (61.93s)

                                                
                                    
TestForceSystemdEnv (86.16s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-007924 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-007924 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m25.247646405s)
helpers_test.go:175: Cleaning up "force-systemd-env-007924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-007924
--- PASS: TestForceSystemdEnv (86.16s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.68s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0120 13:52:13.412966 1927672 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 13:52:13.413162 1927672 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0120 13:52:13.453973 1927672 install.go:62] docker-machine-driver-kvm2: exit status 1
W0120 13:52:13.454411 1927672 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 13:52:13.454505 1927672 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate80917038/001/docker-machine-driver-kvm2
I0120 13:52:13.958447 1927672 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate80917038/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc000725aa0 gz:0xc000725aa8 tar:0xc000725a20 tar.bz2:0xc000725a50 tar.gz:0xc000725a70 tar.xz:0xc000725a80 tar.zst:0xc000725a90 tbz2:0xc000725a50 tgz:0xc000725a70 txz:0xc000725a80 tzst:0xc000725a90 xz:0xc000725ab0 zip:0xc000725ac0 zst:0xc000725ab8] Getters:map[file:0xc001b036b0 http:0xc0006934a0 https:0xc0006934f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code
: 404. trying to get the common version
I0120 13:52:13.958519 1927672 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate80917038/001/docker-machine-driver-kvm2
I0120 13:52:15.614633 1927672 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 13:52:15.614731 1927672 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 13:52:15.647473 1927672 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0120 13:52:15.647515 1927672 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0120 13:52:15.647581 1927672 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 13:52:15.647611 1927672 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate80917038/002/docker-machine-driver-kvm2
I0120 13:52:15.811431 1927672 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate80917038/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc000725aa0 gz:0xc000725aa8 tar:0xc000725a20 tar.bz2:0xc000725a50 tar.gz:0xc000725a70 tar.xz:0xc000725a80 tar.zst:0xc000725a90 tbz2:0xc000725a50 tgz:0xc000725a70 txz:0xc000725a80 tzst:0xc000725a90 xz:0xc000725ab0 zip:0xc000725ac0 zst:0xc000725ab8] Getters:map[file:0xc001b5d7a0 http:0xc000881ef0 https:0xc0000285a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code
: 404. trying to get the common version
I0120 13:52:15.811483 1927672 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate80917038/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.68s)
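The two warnings in this test show the driver updater's fallback: it first requests the architecture-suffixed release asset (docker-machine-driver-kvm2-amd64) and, when that download or its checksum file returns 404, retries the unsuffixed common asset name. The sketch below only mirrors that control flow; fetch is a hypothetical stand-in for the real checksum-verifying downloader.

package main

import (
	"fmt"
	"runtime"
)

// fetch is a hypothetical stand-in for the real downloader (which also
// verifies a .sha256 checksum file); here it only illustrates control flow.
func fetch(url, dst string) error {
	fmt.Printf("downloading %s -> %s\n", url, dst)
	return nil // pretend the download succeeded
}

// downloadDriver tries the arch-specific release asset first and falls back
// to the common, unsuffixed asset name, mirroring the warnings in the log.
func downloadDriver(version, dst string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version + "/docker-machine-driver-kvm2"
	if err := fetch(base+"-"+runtime.GOARCH, dst); err != nil {
		fmt.Printf("arch-specific driver failed (%v); trying the common version\n", err)
		return fetch(base, dst)
	}
	return nil
}

func main() {
	if err := downloadDriver("v1.3.0", "/tmp/docker-machine-driver-kvm2"); err != nil {
		fmt.Println("download failed:", err)
	}
}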

                                                
                                    
TestErrorSpam/setup (42.35s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-975434 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-975434 --driver=kvm2  --container-runtime=crio
E0120 12:57:36.606127 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:36.612633 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:36.624066 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:36.645576 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:36.687114 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:36.768733 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:36.930354 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:37.252160 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:37.894289 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:39.176028 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:41.738990 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:46.860761 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 12:57:57.103285 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-975434 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-975434 --driver=kvm2  --container-runtime=crio: (42.351589367s)
--- PASS: TestErrorSpam/setup (42.35s)

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
TestErrorSpam/stop (5.29s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 stop: (2.326895652s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 stop
E0120 12:58:17.585050 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 stop: (1.92907225s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-975434 --log_dir /tmp/nospam-975434 stop: (1.032241407s)
--- PASS: TestErrorSpam/stop (5.29s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20242-1920423/.minikube/files/etc/test/nested/copy/1927672/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.63s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-038507 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0120 12:58:58.548362 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-038507 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m23.634488716s)
--- PASS: TestFunctional/serial/StartWithProxy (83.63s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.24s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0120 12:59:43.484495 1927672 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-038507 --alsologtostderr -v=8
E0120 13:00:20.470794 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-038507 --alsologtostderr -v=8: (54.239583642s)
functional_test.go:663: soft start took 54.240294414s for "functional-038507" cluster.
I0120 13:00:37.724547 1927672 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (54.24s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-038507 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 cache add registry.k8s.io/pause:3.1: (1.160138118s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 cache add registry.k8s.io/pause:3.3: (1.366073147s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 cache add registry.k8s.io/pause:latest: (1.153393094s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.68s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-038507 /tmp/TestFunctionalserialCacheCmdcacheadd_local1216276144/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 cache add minikube-local-cache-test:functional-038507
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 cache add minikube-local-cache-test:functional-038507: (1.106599935s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 cache delete minikube-local-cache-test:functional-038507
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-038507
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.151201ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 cache reload: (1.027960815s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 kubectl -- --context functional-038507 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-038507 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-038507 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-038507 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.267497802s)
functional_test.go:761: restart took 33.267675595s for "functional-038507" cluster.
I0120 13:01:18.665128 1927672 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (33.27s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-038507 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 logs: (1.567848167s)
--- PASS: TestFunctional/serial/LogsCmd (1.57s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 logs --file /tmp/TestFunctionalserialLogsFileCmd1653817858/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 logs --file /tmp/TestFunctionalserialLogsFileCmd1653817858/001/logs.txt: (1.5388397s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
TestFunctional/serial/InvalidService (4.22s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-038507 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-038507
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-038507: exit status 115 (290.546894ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.67:32107 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-038507 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 config get cpus: exit status 14 (67.214737ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 config get cpus: exit status 14 (63.64655ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-038507 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-038507 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1935724: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.51s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-038507 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
2025/01/20 13:02:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-038507 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.050944ms)

                                                
                                                
-- stdout --
	* [functional-038507] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:02:01.901385 1936278 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:02:01.901669 1936278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:02:01.901680 1936278 out.go:358] Setting ErrFile to fd 2...
	I0120 13:02:01.901687 1936278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:02:01.901919 1936278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:02:01.902547 1936278 out.go:352] Setting JSON to false
	I0120 13:02:01.903671 1936278 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17068,"bootTime":1737361054,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 13:02:01.903830 1936278 start.go:139] virtualization: kvm guest
	I0120 13:02:01.906118 1936278 out.go:177] * [functional-038507] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 13:02:01.907713 1936278 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:02:01.907730 1936278 notify.go:220] Checking for updates...
	I0120 13:02:01.910403 1936278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:02:01.911779 1936278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:02:01.913433 1936278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:02:01.914872 1936278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 13:02:01.916432 1936278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:02:01.918294 1936278 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:02:01.918890 1936278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:02:01.918992 1936278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:02:01.936129 1936278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38803
	I0120 13:02:01.936544 1936278 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:02:01.937042 1936278 main.go:141] libmachine: Using API Version  1
	I0120 13:02:01.937069 1936278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:02:01.937345 1936278 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:02:01.937490 1936278 main.go:141] libmachine: (functional-038507) Calling .DriverName
	I0120 13:02:01.937707 1936278 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:02:01.938004 1936278 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:02:01.938057 1936278 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:02:01.953811 1936278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0120 13:02:01.954343 1936278 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:02:01.955003 1936278 main.go:141] libmachine: Using API Version  1
	I0120 13:02:01.955036 1936278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:02:01.955452 1936278 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:02:01.955657 1936278 main.go:141] libmachine: (functional-038507) Calling .DriverName
	I0120 13:02:01.992045 1936278 out.go:177] * Using the kvm2 driver based on existing profile
	I0120 13:02:01.993506 1936278 start.go:297] selected driver: kvm2
	I0120 13:02:01.993521 1936278 start.go:901] validating driver "kvm2" against &{Name:functional-038507 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-038507 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:02:01.993669 1936278 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:02:01.996169 1936278 out.go:201] 
	W0120 13:02:01.997732 1936278 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 13:02:01.999001 1936278 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-038507 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-038507 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-038507 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.465976ms)

                                                
                                                
-- stdout --
	* [functional-038507] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:02:02.208708 1936334 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:02:02.208845 1936334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:02:02.208855 1936334 out.go:358] Setting ErrFile to fd 2...
	I0120 13:02:02.208859 1936334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:02:02.209190 1936334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:02:02.209777 1936334 out.go:352] Setting JSON to false
	I0120 13:02:02.210915 1936334 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":17068,"bootTime":1737361054,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 13:02:02.211039 1936334 start.go:139] virtualization: kvm guest
	I0120 13:02:02.213150 1936334 out.go:177] * [functional-038507] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0120 13:02:02.214655 1936334 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:02:02.214642 1936334 notify.go:220] Checking for updates...
	I0120 13:02:02.216217 1936334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:02:02.217584 1936334 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:02:02.219132 1936334 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:02:02.220541 1936334 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 13:02:02.221757 1936334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:02:02.223551 1936334 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:02:02.223933 1936334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:02:02.223992 1936334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:02:02.240976 1936334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0120 13:02:02.241397 1936334 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:02:02.241930 1936334 main.go:141] libmachine: Using API Version  1
	I0120 13:02:02.241965 1936334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:02:02.242333 1936334 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:02:02.242535 1936334 main.go:141] libmachine: (functional-038507) Calling .DriverName
	I0120 13:02:02.242802 1936334 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:02:02.243146 1936334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:02:02.243191 1936334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:02:02.260551 1936334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35493
	I0120 13:02:02.261093 1936334 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:02:02.261772 1936334 main.go:141] libmachine: Using API Version  1
	I0120 13:02:02.261820 1936334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:02:02.262222 1936334 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:02:02.262420 1936334 main.go:141] libmachine: (functional-038507) Calling .DriverName
	I0120 13:02:02.302072 1936334 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0120 13:02:02.303415 1936334 start.go:297] selected driver: kvm2
	I0120 13:02:02.303439 1936334 start.go:901] validating driver "kvm2" against &{Name:functional-038507 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-038507 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 13:02:02.303598 1936334 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:02:02.305531 1936334 out.go:201] 
	W0120 13:02:02.306835 1936334 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 13:02:02.308376 1936334 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-038507 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-038507 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-nzwf4" [3aea785c-4d81-49e9-b31c-52009d905272] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-nzwf4" [3aea785c-4d81-49e9-b31c-52009d905272] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.027775184s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.67:32211
functional_test.go:1675: http://192.168.39.67:32211: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-nzwf4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.67:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.67:32211
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.57s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8e17e4ec-9a3f-47fc-88c6-1f71ef2749c6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004555426s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-038507 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-038507 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-038507 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-038507 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [73e48e16-0c50-4a0d-89b4-ef74237c4a19] Pending
helpers_test.go:344: "sp-pod" [73e48e16-0c50-4a0d-89b4-ef74237c4a19] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [73e48e16-0c50-4a0d-89b4-ef74237c4a19] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.104401465s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-038507 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-038507 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-038507 delete -f testdata/storage-provisioner/pod.yaml: (2.977325993s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-038507 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9c80afc9-9bfd-46f3-a766-ba72883ff5db] Pending
helpers_test.go:344: "sp-pod" [9c80afc9-9bfd-46f3-a766-ba72883ff5db] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9c80afc9-9bfd-46f3-a766-ba72883ff5db] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004257747s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-038507 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.16s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh -n functional-038507 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 cp functional-038507:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3822315453/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh -n functional-038507 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh -n functional-038507 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.51s)

                                                
                                    
TestFunctional/parallel/MySQL (24.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-038507 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-d4fsv" [0e295e2b-8a80-4d8d-a474-1cf85e82560b] Pending
helpers_test.go:344: "mysql-58ccfd96bb-d4fsv" [0e295e2b-8a80-4d8d-a474-1cf85e82560b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-d4fsv" [0e295e2b-8a80-4d8d-a474-1cf85e82560b] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.0145635s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-038507 exec mysql-58ccfd96bb-d4fsv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-038507 exec mysql-58ccfd96bb-d4fsv -- mysql -ppassword -e "show databases;": exit status 1 (293.455483ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0120 13:01:59.230497 1927672 retry.go:31] will retry after 1.048401168s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-038507 exec mysql-58ccfd96bb-d4fsv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-038507 exec mysql-58ccfd96bb-d4fsv -- mysql -ppassword -e "show databases;": exit status 1 (878.749045ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0120 13:02:01.158939 1927672 retry.go:31] will retry after 1.097854796s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-038507 exec mysql-58ccfd96bb-d4fsv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.74s)

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1927672/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo cat /etc/test/nested/copy/1927672/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1927672.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo cat /etc/ssl/certs/1927672.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1927672.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo cat /usr/share/ca-certificates/1927672.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/19276722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo cat /etc/ssl/certs/19276722.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/19276722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo cat /usr/share/ca-certificates/19276722.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-038507 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 ssh "sudo systemctl is-active docker": exit status 1 (242.924048ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 ssh "sudo systemctl is-active containerd": exit status 1 (247.815456ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
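What this test asserts, as a hedged sketch: with crio selected as the runtime, "systemctl is-active docker" and "systemctl is-active containerd" should print "inactive" and exit non-zero (status 3), which ssh surfaces as a non-zero exit of the minikube command. runtimeIsInactive is a made-up helper name, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeIsInactive mirrors the assertion above: "systemctl is-active <unit>"
// should report "inactive" and exit non-zero when crio is the active runtime.
func runtimeIsInactive(profile, unit string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo systemctl is-active "+unit)
	out, err := cmd.CombinedOutput()
	// A non-nil err is the expected case here: is-active exits 3 for inactive units.
	return err != nil && strings.Contains(string(out), "inactive")
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeIsInactive("functional-038507", unit))
	}
}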

                                                
                                    
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-038507 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-038507
localhost/kicbase/echo-server:functional-038507
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-038507 image ls --format short --alsologtostderr:
I0120 13:02:02.454324 1936385 out.go:345] Setting OutFile to fd 1 ...
I0120 13:02:02.454710 1936385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:02.454725 1936385 out.go:358] Setting ErrFile to fd 2...
I0120 13:02:02.454732 1936385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:02.455003 1936385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
I0120 13:02:02.455834 1936385 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:02.455962 1936385 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:02.456452 1936385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:02.456568 1936385 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:02.477076 1936385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
I0120 13:02:02.477696 1936385 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:02.478374 1936385 main.go:141] libmachine: Using API Version  1
I0120 13:02:02.478400 1936385 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:02.478840 1936385 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:02.479055 1936385 main.go:141] libmachine: (functional-038507) Calling .GetState
I0120 13:02:02.483849 1936385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:02.483901 1936385 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:02.502079 1936385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
I0120 13:02:02.502640 1936385 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:02.503273 1936385 main.go:141] libmachine: Using API Version  1
I0120 13:02:02.503305 1936385 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:02.503742 1936385 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:02.503976 1936385 main.go:141] libmachine: (functional-038507) Calling .DriverName
I0120 13:02:02.504226 1936385 ssh_runner.go:195] Run: systemctl --version
I0120 13:02:02.504260 1936385 main.go:141] libmachine: (functional-038507) Calling .GetSSHHostname
I0120 13:02:02.507895 1936385 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:02.508360 1936385 main.go:141] libmachine: (functional-038507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:53:73", ip: ""} in network mk-functional-038507: {Iface:virbr1 ExpiryTime:2025-01-20 13:58:35 +0000 UTC Type:0 Mac:52:54:00:ff:53:73 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-038507 Clientid:01:52:54:00:ff:53:73}
I0120 13:02:02.508380 1936385 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined IP address 192.168.39.67 and MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:02.508549 1936385 main.go:141] libmachine: (functional-038507) Calling .GetSSHPort
I0120 13:02:02.508747 1936385 main.go:141] libmachine: (functional-038507) Calling .GetSSHKeyPath
I0120 13:02:02.508873 1936385 main.go:141] libmachine: (functional-038507) Calling .GetSSHUsername
I0120 13:02:02.508996 1936385 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/functional-038507/id_rsa Username:docker}
I0120 13:02:02.645534 1936385 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 13:02:02.782144 1936385 main.go:141] libmachine: Making call to close driver server
I0120 13:02:02.782171 1936385 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:02.782522 1936385 main.go:141] libmachine: (functional-038507) DBG | Closing plugin on server side
I0120 13:02:02.782547 1936385 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:02.782556 1936385 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 13:02:02.782564 1936385 main.go:141] libmachine: Making call to close driver server
I0120 13:02:02.782572 1936385 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:02.782825 1936385 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:02.782841 1936385 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.42s)
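The listing above is produced from "sudo crictl images --output json" inside the VM (visible in the ssh_runner line in the stderr). A minimal sketch of turning that JSON into the tag-per-line output; the struct fields mirror the crictl output seen in this run and should be read as an assumption rather than a schema guarantee.

package main

import (
	"encoding/json"
	"fmt"
)

// Field names mirror the crictl JSON observed here; treat them as an assumption.
type crictlImage struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// In the test this JSON arrives over ssh from "sudo crictl images --output json";
	// a tiny literal stands in here.
	raw := []byte(`{"images":[{"id":"873ed7510279","repoTags":["registry.k8s.io/pause:3.10"]}]}`)
	var list crictlImageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // one tag per line, as in the "image ls --format short" stdout
		}
	}
}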

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-038507 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 9bea9f2796e23 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.32.0            | c2e17b8d0f4a3 | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.0            | 8cab3d2a8bd0f | 90.8MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-038507  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| localhost/minikube-local-cache-test     | functional-038507  | b0d74a0fed3df | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.0            | a389e107f4ff1 | 70.6MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.32.0            | 040f9f8aac8cd | 95.3MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-038507 image ls --format table --alsologtostderr:
I0120 13:02:03.190820 1936540 out.go:345] Setting OutFile to fd 1 ...
I0120 13:02:03.191174 1936540 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:03.191193 1936540 out.go:358] Setting ErrFile to fd 2...
I0120 13:02:03.191200 1936540 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:03.191496 1936540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
I0120 13:02:03.193276 1936540 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:03.193500 1936540 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:03.194047 1936540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:03.194108 1936540 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:03.210314 1936540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
I0120 13:02:03.210822 1936540 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:03.211562 1936540 main.go:141] libmachine: Using API Version  1
I0120 13:02:03.211601 1936540 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:03.212000 1936540 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:03.212295 1936540 main.go:141] libmachine: (functional-038507) Calling .GetState
I0120 13:02:03.214419 1936540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:03.214467 1936540 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:03.231069 1936540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
I0120 13:02:03.231608 1936540 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:03.232190 1936540 main.go:141] libmachine: Using API Version  1
I0120 13:02:03.232222 1936540 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:03.232628 1936540 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:03.232869 1936540 main.go:141] libmachine: (functional-038507) Calling .DriverName
I0120 13:02:03.233106 1936540 ssh_runner.go:195] Run: systemctl --version
I0120 13:02:03.233144 1936540 main.go:141] libmachine: (functional-038507) Calling .GetSSHHostname
I0120 13:02:03.236118 1936540 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:03.236531 1936540 main.go:141] libmachine: (functional-038507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:53:73", ip: ""} in network mk-functional-038507: {Iface:virbr1 ExpiryTime:2025-01-20 13:58:35 +0000 UTC Type:0 Mac:52:54:00:ff:53:73 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-038507 Clientid:01:52:54:00:ff:53:73}
I0120 13:02:03.236566 1936540 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined IP address 192.168.39.67 and MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:03.236675 1936540 main.go:141] libmachine: (functional-038507) Calling .GetSSHPort
I0120 13:02:03.236843 1936540 main.go:141] libmachine: (functional-038507) Calling .GetSSHKeyPath
I0120 13:02:03.236988 1936540 main.go:141] libmachine: (functional-038507) Calling .GetSSHUsername
I0120 13:02:03.237132 1936540 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/functional-038507/id_rsa Username:docker}
I0120 13:02:03.375242 1936540 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 13:02:03.469176 1936540 main.go:141] libmachine: Making call to close driver server
I0120 13:02:03.469202 1936540 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:03.469542 1936540 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:03.469561 1936540 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 13:02:03.469572 1936540 main.go:141] libmachine: Making call to close driver server
I0120 13:02:03.469580 1936540 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:03.469824 1936540 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:03.469850 1936540 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 13:02:03.469881 1936540 main.go:141] libmachine: (functional-038507) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-038507 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-038507"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},
{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a","docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9"],"repoTags":["docker.io/library/nginx:latest"],"size":"195872148"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags
":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0feb9730f9de32b0b1c5cc0eb756c1f4abf2246f1ac8d3fe75285bfee282d0ac","registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"90789190"},{"id":"040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4","registry.k8s.io/kube-proxy@sha256:8db2ca0e784c2188157f005aac67afbbb70d3d68747eea23765bef83917a5a31"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"95270297"},{"id":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1ce9d9222572dc72760ba18589a048b3cf32163dac0708522f3b991974fafdec","regist
ry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"70649156"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c
111741ab4ad05e7c5d37539aaf7dc3b","registry.k8s.io/kube-apiserver@sha256:fe1eb8fc870b01f4b1f470d2b179a1d1a86d6e2fa174bd10c01bf45bc5b03200"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"98051552"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry
.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"b0d74a0fed3dfec070e792b49eba9bd3ab2ccde8eda7268f937f350199d3d933","repoDigests":["localhost/minikube-local-cache-test@sha256:3bbc52c74429
2bee51cf8c78300ee2c4862ef10ea780d5ee8d0de2a3da4a0a96"],"repoTags":["localhost/minikube-local-cache-test:functional-038507"],"size":"3328"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-038507 image ls --format json --alsologtostderr:
I0120 13:02:02.848320 1936458 out.go:345] Setting OutFile to fd 1 ...
I0120 13:02:02.848426 1936458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:02.848432 1936458 out.go:358] Setting ErrFile to fd 2...
I0120 13:02:02.848437 1936458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:02.848691 1936458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
I0120 13:02:02.849387 1936458 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:02.849525 1936458 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:02.853015 1936458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:02.853089 1936458 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:02.870678 1936458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45373
I0120 13:02:02.871248 1936458 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:02.871963 1936458 main.go:141] libmachine: Using API Version  1
I0120 13:02:02.871995 1936458 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:02.872333 1936458 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:02.872603 1936458 main.go:141] libmachine: (functional-038507) Calling .GetState
I0120 13:02:02.874772 1936458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:02.874830 1936458 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:02.891215 1936458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
I0120 13:02:02.891678 1936458 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:02.892348 1936458 main.go:141] libmachine: Using API Version  1
I0120 13:02:02.892381 1936458 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:02.892743 1936458 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:02.892934 1936458 main.go:141] libmachine: (functional-038507) Calling .DriverName
I0120 13:02:02.893163 1936458 ssh_runner.go:195] Run: systemctl --version
I0120 13:02:02.893196 1936458 main.go:141] libmachine: (functional-038507) Calling .GetSSHHostname
I0120 13:02:02.896654 1936458 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:02.897044 1936458 main.go:141] libmachine: (functional-038507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:53:73", ip: ""} in network mk-functional-038507: {Iface:virbr1 ExpiryTime:2025-01-20 13:58:35 +0000 UTC Type:0 Mac:52:54:00:ff:53:73 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-038507 Clientid:01:52:54:00:ff:53:73}
I0120 13:02:02.897070 1936458 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined IP address 192.168.39.67 and MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:02.897253 1936458 main.go:141] libmachine: (functional-038507) Calling .GetSSHPort
I0120 13:02:02.897461 1936458 main.go:141] libmachine: (functional-038507) Calling .GetSSHKeyPath
I0120 13:02:02.897628 1936458 main.go:141] libmachine: (functional-038507) Calling .GetSSHUsername
I0120 13:02:02.897788 1936458 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/functional-038507/id_rsa Username:docker}
I0120 13:02:03.008838 1936458 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 13:02:03.125149 1936458 main.go:141] libmachine: Making call to close driver server
I0120 13:02:03.125171 1936458 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:03.125564 1936458 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:03.125600 1936458 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 13:02:03.125614 1936458 main.go:141] libmachine: Making call to close driver server
I0120 13:02:03.125625 1936458 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:03.125883 1936458 main.go:141] libmachine: (functional-038507) DBG | Closing plugin on server side
I0120 13:02:03.125915 1936458 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:03.125921 1936458 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-038507 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
- docker.io/library/nginx@sha256:2426c815287ed75a3a33dd28512eba4f0f783946844209ccf3fa8990817a4eb9
repoTags:
- docker.io/library/nginx:latest
size: "195872148"
- id: 8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0feb9730f9de32b0b1c5cc0eb756c1f4abf2246f1ac8d3fe75285bfee282d0ac
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "90789190"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: b0d74a0fed3dfec070e792b49eba9bd3ab2ccde8eda7268f937f350199d3d933
repoDigests:
- localhost/minikube-local-cache-test@sha256:3bbc52c744292bee51cf8c78300ee2c4862ef10ea780d5ee8d0de2a3da4a0a96
repoTags:
- localhost/minikube-local-cache-test:functional-038507
size: "3328"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
- registry.k8s.io/kube-apiserver@sha256:fe1eb8fc870b01f4b1f470d2b179a1d1a86d6e2fa174bd10c01bf45bc5b03200
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "98051552"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-038507
size: "4943877"
- id: a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1ce9d9222572dc72760ba18589a048b3cf32163dac0708522f3b991974fafdec
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "70649156"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
- registry.k8s.io/kube-proxy@sha256:8db2ca0e784c2188157f005aac67afbbb70d3d68747eea23765bef83917a5a31
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "95270297"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-038507 image ls --format yaml --alsologtostderr:
I0120 13:02:02.489917 1936403 out.go:345] Setting OutFile to fd 1 ...
I0120 13:02:02.490160 1936403 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:02.490172 1936403 out.go:358] Setting ErrFile to fd 2...
I0120 13:02:02.490177 1936403 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:02.490397 1936403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
I0120 13:02:02.491029 1936403 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:02.491138 1936403 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:02.491534 1936403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:02.491590 1936403 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:02.507626 1936403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
I0120 13:02:02.508134 1936403 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:02.508899 1936403 main.go:141] libmachine: Using API Version  1
I0120 13:02:02.508920 1936403 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:02.509229 1936403 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:02.509439 1936403 main.go:141] libmachine: (functional-038507) Calling .GetState
I0120 13:02:02.511407 1936403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:02.511458 1936403 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:02.527570 1936403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43343
I0120 13:02:02.528109 1936403 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:02.528633 1936403 main.go:141] libmachine: Using API Version  1
I0120 13:02:02.528662 1936403 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:02.529034 1936403 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:02.529269 1936403 main.go:141] libmachine: (functional-038507) Calling .DriverName
I0120 13:02:02.529490 1936403 ssh_runner.go:195] Run: systemctl --version
I0120 13:02:02.529529 1936403 main.go:141] libmachine: (functional-038507) Calling .GetSSHHostname
I0120 13:02:02.532723 1936403 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:02.533134 1936403 main.go:141] libmachine: (functional-038507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:53:73", ip: ""} in network mk-functional-038507: {Iface:virbr1 ExpiryTime:2025-01-20 13:58:35 +0000 UTC Type:0 Mac:52:54:00:ff:53:73 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-038507 Clientid:01:52:54:00:ff:53:73}
I0120 13:02:02.533168 1936403 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined IP address 192.168.39.67 and MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:02.533332 1936403 main.go:141] libmachine: (functional-038507) Calling .GetSSHPort
I0120 13:02:02.533546 1936403 main.go:141] libmachine: (functional-038507) Calling .GetSSHKeyPath
I0120 13:02:02.533689 1936403 main.go:141] libmachine: (functional-038507) Calling .GetSSHUsername
I0120 13:02:02.533838 1936403 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/functional-038507/id_rsa Username:docker}
I0120 13:02:02.658938 1936403 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 13:02:02.767939 1936403 main.go:141] libmachine: Making call to close driver server
I0120 13:02:02.767956 1936403 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:02.768342 1936403 main.go:141] libmachine: (functional-038507) DBG | Closing plugin on server side
I0120 13:02:02.768390 1936403 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:02.768400 1936403 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 13:02:02.768416 1936403 main.go:141] libmachine: Making call to close driver server
I0120 13:02:02.768425 1936403 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:02.768685 1936403 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:02.768713 1936403 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 13:02:02.768732 1936403 main.go:141] libmachine: (functional-038507) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 ssh pgrep buildkitd: exit status 1 (283.669119ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image build -t localhost/my-image:functional-038507 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 image build -t localhost/my-image:functional-038507 testdata/build --alsologtostderr: (2.426949067s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-038507 image build -t localhost/my-image:functional-038507 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 40184647d9d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-038507
--> fb80233bc95
Successfully tagged localhost/my-image:functional-038507
fb80233bc952ea520c4659141a88e34ed24e731439da38f4e5f4803442e1c751
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-038507 image build -t localhost/my-image:functional-038507 testdata/build --alsologtostderr:
I0120 13:02:03.111309 1936521 out.go:345] Setting OutFile to fd 1 ...
I0120 13:02:03.111607 1936521 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:03.111620 1936521 out.go:358] Setting ErrFile to fd 2...
I0120 13:02:03.111624 1936521 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 13:02:03.111804 1936521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
I0120 13:02:03.112463 1936521 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:03.113117 1936521 config.go:182] Loaded profile config "functional-038507": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 13:02:03.114322 1936521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:03.114410 1936521 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:03.131765 1936521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
I0120 13:02:03.132262 1936521 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:03.132818 1936521 main.go:141] libmachine: Using API Version  1
I0120 13:02:03.132842 1936521 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:03.133204 1936521 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:03.133375 1936521 main.go:141] libmachine: (functional-038507) Calling .GetState
I0120 13:02:03.135389 1936521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 13:02:03.135464 1936521 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 13:02:03.156322 1936521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34437
I0120 13:02:03.157830 1936521 main.go:141] libmachine: () Calling .GetVersion
I0120 13:02:03.158519 1936521 main.go:141] libmachine: Using API Version  1
I0120 13:02:03.158550 1936521 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 13:02:03.158955 1936521 main.go:141] libmachine: () Calling .GetMachineName
I0120 13:02:03.159192 1936521 main.go:141] libmachine: (functional-038507) Calling .DriverName
I0120 13:02:03.159415 1936521 ssh_runner.go:195] Run: systemctl --version
I0120 13:02:03.159452 1936521 main.go:141] libmachine: (functional-038507) Calling .GetSSHHostname
I0120 13:02:03.162746 1936521 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:03.163191 1936521 main.go:141] libmachine: (functional-038507) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ff:53:73", ip: ""} in network mk-functional-038507: {Iface:virbr1 ExpiryTime:2025-01-20 13:58:35 +0000 UTC Type:0 Mac:52:54:00:ff:53:73 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-038507 Clientid:01:52:54:00:ff:53:73}
I0120 13:02:03.163319 1936521 main.go:141] libmachine: (functional-038507) DBG | domain functional-038507 has defined IP address 192.168.39.67 and MAC address 52:54:00:ff:53:73 in network mk-functional-038507
I0120 13:02:03.163650 1936521 main.go:141] libmachine: (functional-038507) Calling .GetSSHPort
I0120 13:02:03.163851 1936521 main.go:141] libmachine: (functional-038507) Calling .GetSSHKeyPath
I0120 13:02:03.163994 1936521 main.go:141] libmachine: (functional-038507) Calling .GetSSHUsername
I0120 13:02:03.164180 1936521 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/functional-038507/id_rsa Username:docker}
I0120 13:02:03.286202 1936521 build_images.go:161] Building image from path: /tmp/build.3425689178.tar
I0120 13:02:03.286298 1936521 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 13:02:03.305282 1936521 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3425689178.tar
I0120 13:02:03.320389 1936521 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3425689178.tar: stat -c "%s %y" /var/lib/minikube/build/build.3425689178.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3425689178.tar': No such file or directory
I0120 13:02:03.320464 1936521 ssh_runner.go:362] scp /tmp/build.3425689178.tar --> /var/lib/minikube/build/build.3425689178.tar (3072 bytes)
I0120 13:02:03.379685 1936521 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3425689178
I0120 13:02:03.400108 1936521 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3425689178 -xf /var/lib/minikube/build/build.3425689178.tar
I0120 13:02:03.417361 1936521 crio.go:315] Building image: /var/lib/minikube/build/build.3425689178
I0120 13:02:03.417436 1936521 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-038507 /var/lib/minikube/build/build.3425689178 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0120 13:02:05.457993 1936521 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-038507 /var/lib/minikube/build/build.3425689178 --cgroup-manager=cgroupfs: (2.040519638s)
I0120 13:02:05.458084 1936521 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3425689178
I0120 13:02:05.470210 1936521 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3425689178.tar
I0120 13:02:05.481107 1936521 build_images.go:217] Built localhost/my-image:functional-038507 from /tmp/build.3425689178.tar
I0120 13:02:05.481147 1936521 build_images.go:133] succeeded building to: functional-038507
I0120 13:02:05.481152 1936521 build_images.go:134] failed building to: 
I0120 13:02:05.481184 1936521 main.go:141] libmachine: Making call to close driver server
I0120 13:02:05.481201 1936521 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:05.481511 1936521 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:05.481535 1936521 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 13:02:05.481549 1936521 main.go:141] libmachine: Making call to close driver server
I0120 13:02:05.481557 1936521 main.go:141] libmachine: (functional-038507) Calling .Close
I0120 13:02:05.481599 1936521 main.go:141] libmachine: (functional-038507) DBG | Closing plugin on server side
I0120 13:02:05.481820 1936521 main.go:141] libmachine: Successfully made call to close driver server
I0120 13:02:05.481839 1936521 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)
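The stderr above shows the non-buildkit path: pgrep finds no buildkitd, so the build context is tarred, copied into /var/lib/minikube/build inside the VM, unpacked, and built with "sudo podman build ... --cgroup-manager=cgroupfs". An illustrative Go sequence of those in-VM steps, not minikube's own code; the directory name "example" is invented, and the context tar is assumed to have been copied in beforehand.

package main

import (
	"fmt"
	"os/exec"
)

// sshRun executes one command inside the VM via "minikube ssh".
func sshRun(profile, command string) error {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", command).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v\n%s", command, err, out)
	}
	return nil
}

func main() {
	profile := "functional-038507"
	dir := "/var/lib/minikube/build/example" // invented name; the real path embeds a random build id
	steps := []string{
		"sudo mkdir -p " + dir,
		"sudo tar -C " + dir + " -xf " + dir + ".tar", // context tar assumed already copied in
		"sudo podman build -t localhost/my-image:" + profile + " " + dir + " --cgroup-manager=cgroupfs",
		"sudo rm -rf " + dir + " " + dir + ".tar",
	}
	for _, step := range steps {
		if err := sshRun(profile, step); err != nil {
			fmt.Println("build step failed:", err)
			return
		}
	}
}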

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-038507
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "343.677942ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.038388ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image load --daemon kicbase/echo-server:functional-038507 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 image load --daemon kicbase/echo-server:functional-038507 --alsologtostderr: (1.776226728s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 image ls: (1.026849913s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.80s)
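Setup plus ImageLoadDaemon amount to: pull kicbase/echo-server:1.0 on the host, tag it with the profile name, push that tag from the host docker daemon into the cluster runtime with "image load --daemon", then confirm it with "image ls". A hedged shell-out sketch of the same sequence:

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	profile := "functional-038507"
	tag := "kicbase/echo-server:" + profile
	cmds := [][]string{
		{"docker", "pull", "kicbase/echo-server:1.0"},
		{"docker", "tag", "kicbase/echo-server:1.0", tag},
		{"out/minikube-linux-amd64", "-p", profile, "image", "load", "--daemon", tag},
		{"out/minikube-linux-amd64", "-p", profile, "image", "ls"}, // the tag should now be listed
	}
	for _, c := range cmds {
		if err := run(c[0], c[1:]...); err != nil {
			fmt.Println(err)
			return
		}
	}
}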

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "317.285263ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.50641ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-038507 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-038507 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-99h8h" [2522b743-5f08-430e-b232-8e39d3076771] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-99h8h" [2522b743-5f08-430e-b232-8e39d3076771] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.037658899s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)
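DeployApp boils down to three kubectl steps: create a Deployment from registry.k8s.io/echoserver:1.8, expose it as a NodePort Service on port 8080, and wait for the pod to become Ready. A minimal sketch driving kubectl directly; the test does the equivalent through its own wait helpers.

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(ctx string, args ...string) error {
	out, err := exec.Command("kubectl", append([]string{"--context", ctx}, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	ctx := "functional-038507"
	steps := [][]string{
		{"create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8"},
		{"expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
		// Rough equivalent of the test's 10m wait for pods matching app=hello-node.
		{"wait", "--for=condition=Ready", "pod", "-l", "app=hello-node", "--timeout=10m"},
	}
	for _, s := range steps {
		if err := kubectl(ctx, s...); err != nil {
			fmt.Println(err)
			return
		}
	}
}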

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image load --daemon kicbase/echo-server:functional-038507 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-038507 image load --daemon kicbase/echo-server:functional-038507 --alsologtostderr: (1.055841789s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-038507
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image load --daemon kicbase/echo-server:functional-038507 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image save kicbase/echo-server:functional-038507 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image rm kicbase/echo-server:functional-038507 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.01s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-038507
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 image save --daemon kicbase/echo-server:functional-038507 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-038507
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/MountCmd/any-port (19.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdany-port2862102080/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737378098320325553" to /tmp/TestFunctionalparallelMountCmdany-port2862102080/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737378098320325553" to /tmp/TestFunctionalparallelMountCmdany-port2862102080/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737378098320325553" to /tmp/TestFunctionalparallelMountCmdany-port2862102080/001/test-1737378098320325553
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.28929ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 13:01:38.575026 1927672 retry.go:31] will retry after 493.04956ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 13:01 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 13:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 13:01 test-1737378098320325553
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh cat /mount-9p/test-1737378098320325553
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-038507 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [596e76c9-803c-4ece-b0a1-88e9285e1918] Pending
helpers_test.go:344: "busybox-mount" [596e76c9-803c-4ece-b0a1-88e9285e1918] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [596e76c9-803c-4ece-b0a1-88e9285e1918] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [596e76c9-803c-4ece-b0a1-88e9285e1918] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.004583289s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-038507 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdany-port2862102080/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.79s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 service list -o json
functional_test.go:1494: Took "570.968615ms" to run "out/minikube-linux-amd64 -p functional-038507 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.67:31644
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.67:31644
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/MountCmd/specific-port (2.14s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdspecific-port454231379/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.586352ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 13:01:58.463098 1927672 retry.go:31] will retry after 599.924759ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdspecific-port454231379/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 ssh "sudo umount -f /mount-9p": exit status 1 (253.941829ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-038507 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdspecific-port454231379/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.14s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1708478757/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1708478757/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1708478757/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T" /mount1: exit status 1 (294.106372ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0120 13:02:00.546837 1927672 retry.go:31] will retry after 613.987533ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-038507 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-038507 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1708478757/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1708478757/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-038507 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1708478757/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-038507
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-038507
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-038507
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (200.46s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-794751 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 13:02:36.597449 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:03:04.312825 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-794751 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m19.722686454s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.46s)

TestMultiControlPlane/serial/DeployApp (6.18s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-794751 -- rollout status deployment/busybox: (3.866628709s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-5hvf2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-qjmk5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-tggjr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-5hvf2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-qjmk5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-tggjr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-5hvf2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-qjmk5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-tggjr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.18s)

TestMultiControlPlane/serial/PingHostFromPods (1.29s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-5hvf2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-5hvf2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-qjmk5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-qjmk5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-tggjr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-794751 -- exec busybox-58667487b6-tggjr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

TestMultiControlPlane/serial/AddWorkerNode (59s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-794751 -v=7 --alsologtostderr
E0120 13:06:26.556619 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:26.563191 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:26.574758 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:26.596314 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:26.637789 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:26.719307 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:26.880930 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:27.202746 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:27.844689 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:29.127042 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:31.688636 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:06:36.810757 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-794751 -v=7 --alsologtostderr: (58.08060788s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.00s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-794751 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (13.94s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp testdata/cp-test.txt ha-794751:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1330323670/001/cp-test_ha-794751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751:/home/docker/cp-test.txt ha-794751-m02:/home/docker/cp-test_ha-794751_ha-794751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m02 "sudo cat /home/docker/cp-test_ha-794751_ha-794751-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751:/home/docker/cp-test.txt ha-794751-m03:/home/docker/cp-test_ha-794751_ha-794751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m03 "sudo cat /home/docker/cp-test_ha-794751_ha-794751-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751:/home/docker/cp-test.txt ha-794751-m04:/home/docker/cp-test_ha-794751_ha-794751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m04 "sudo cat /home/docker/cp-test_ha-794751_ha-794751-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp testdata/cp-test.txt ha-794751-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1330323670/001/cp-test_ha-794751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m02:/home/docker/cp-test.txt ha-794751:/home/docker/cp-test_ha-794751-m02_ha-794751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751 "sudo cat /home/docker/cp-test_ha-794751-m02_ha-794751.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m02:/home/docker/cp-test.txt ha-794751-m03:/home/docker/cp-test_ha-794751-m02_ha-794751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m03 "sudo cat /home/docker/cp-test_ha-794751-m02_ha-794751-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m02:/home/docker/cp-test.txt ha-794751-m04:/home/docker/cp-test_ha-794751-m02_ha-794751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m02 "sudo cat /home/docker/cp-test.txt"
E0120 13:06:47.052167 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m04 "sudo cat /home/docker/cp-test_ha-794751-m02_ha-794751-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp testdata/cp-test.txt ha-794751-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1330323670/001/cp-test_ha-794751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m03:/home/docker/cp-test.txt ha-794751:/home/docker/cp-test_ha-794751-m03_ha-794751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751 "sudo cat /home/docker/cp-test_ha-794751-m03_ha-794751.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m03:/home/docker/cp-test.txt ha-794751-m02:/home/docker/cp-test_ha-794751-m03_ha-794751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m02 "sudo cat /home/docker/cp-test_ha-794751-m03_ha-794751-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m03:/home/docker/cp-test.txt ha-794751-m04:/home/docker/cp-test_ha-794751-m03_ha-794751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m04 "sudo cat /home/docker/cp-test_ha-794751-m03_ha-794751-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp testdata/cp-test.txt ha-794751-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1330323670/001/cp-test_ha-794751-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m04:/home/docker/cp-test.txt ha-794751:/home/docker/cp-test_ha-794751-m04_ha-794751.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751 "sudo cat /home/docker/cp-test_ha-794751-m04_ha-794751.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m04:/home/docker/cp-test.txt ha-794751-m02:/home/docker/cp-test_ha-794751-m04_ha-794751-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m02 "sudo cat /home/docker/cp-test_ha-794751-m04_ha-794751-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 cp ha-794751-m04:/home/docker/cp-test.txt ha-794751-m03:/home/docker/cp-test_ha-794751-m04_ha-794751-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 ssh -n ha-794751-m03 "sudo cat /home/docker/cp-test_ha-794751-m04_ha-794751-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.94s)

TestMultiControlPlane/serial/StopSecondaryNode (91.74s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 node stop m02 -v=7 --alsologtostderr
E0120 13:07:07.534531 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:07:36.597737 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:07:48.496654 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-794751 node stop m02 -v=7 --alsologtostderr: (1m31.031488977s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr: exit status 7 (702.776861ms)

-- stdout --
	ha-794751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794751-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-794751-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-794751-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0120 13:08:24.966495 1941286 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:08:24.966600 1941286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:08:24.966622 1941286 out.go:358] Setting ErrFile to fd 2...
	I0120 13:08:24.966628 1941286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:08:24.966841 1941286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:08:24.967042 1941286 out.go:352] Setting JSON to false
	I0120 13:08:24.967075 1941286 mustload.go:65] Loading cluster: ha-794751
	I0120 13:08:24.967123 1941286 notify.go:220] Checking for updates...
	I0120 13:08:24.967651 1941286 config.go:182] Loaded profile config "ha-794751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:08:24.967682 1941286 status.go:174] checking status of ha-794751 ...
	I0120 13:08:24.968222 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:24.968266 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:24.987965 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39595
	I0120 13:08:24.988518 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:24.989235 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:24.989267 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:24.989661 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:24.989893 1941286 main.go:141] libmachine: (ha-794751) Calling .GetState
	I0120 13:08:24.991940 1941286 status.go:371] ha-794751 host status = "Running" (err=<nil>)
	I0120 13:08:24.991963 1941286 host.go:66] Checking if "ha-794751" exists ...
	I0120 13:08:24.992294 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:24.992340 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:25.009080 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46229
	I0120 13:08:25.009584 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:25.010158 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:25.010192 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:25.010555 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:25.010791 1941286 main.go:141] libmachine: (ha-794751) Calling .GetIP
	I0120 13:08:25.013904 1941286 main.go:141] libmachine: (ha-794751) DBG | domain ha-794751 has defined MAC address 52:54:00:3d:18:fd in network mk-ha-794751
	I0120 13:08:25.014536 1941286 main.go:141] libmachine: (ha-794751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:18:fd", ip: ""} in network mk-ha-794751: {Iface:virbr1 ExpiryTime:2025-01-20 14:02:27 +0000 UTC Type:0 Mac:52:54:00:3d:18:fd Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-794751 Clientid:01:52:54:00:3d:18:fd}
	I0120 13:08:25.014581 1941286 main.go:141] libmachine: (ha-794751) DBG | domain ha-794751 has defined IP address 192.168.39.206 and MAC address 52:54:00:3d:18:fd in network mk-ha-794751
	I0120 13:08:25.014723 1941286 host.go:66] Checking if "ha-794751" exists ...
	I0120 13:08:25.015096 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:25.015143 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:25.035655 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0120 13:08:25.036146 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:25.036737 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:25.036772 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:25.037198 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:25.037416 1941286 main.go:141] libmachine: (ha-794751) Calling .DriverName
	I0120 13:08:25.037661 1941286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:08:25.037711 1941286 main.go:141] libmachine: (ha-794751) Calling .GetSSHHostname
	I0120 13:08:25.041377 1941286 main.go:141] libmachine: (ha-794751) DBG | domain ha-794751 has defined MAC address 52:54:00:3d:18:fd in network mk-ha-794751
	I0120 13:08:25.041914 1941286 main.go:141] libmachine: (ha-794751) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:18:fd", ip: ""} in network mk-ha-794751: {Iface:virbr1 ExpiryTime:2025-01-20 14:02:27 +0000 UTC Type:0 Mac:52:54:00:3d:18:fd Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:ha-794751 Clientid:01:52:54:00:3d:18:fd}
	I0120 13:08:25.041947 1941286 main.go:141] libmachine: (ha-794751) DBG | domain ha-794751 has defined IP address 192.168.39.206 and MAC address 52:54:00:3d:18:fd in network mk-ha-794751
	I0120 13:08:25.042042 1941286 main.go:141] libmachine: (ha-794751) Calling .GetSSHPort
	I0120 13:08:25.042286 1941286 main.go:141] libmachine: (ha-794751) Calling .GetSSHKeyPath
	I0120 13:08:25.042433 1941286 main.go:141] libmachine: (ha-794751) Calling .GetSSHUsername
	I0120 13:08:25.042572 1941286 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/ha-794751/id_rsa Username:docker}
	I0120 13:08:25.134197 1941286 ssh_runner.go:195] Run: systemctl --version
	I0120 13:08:25.141793 1941286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:08:25.159593 1941286 kubeconfig.go:125] found "ha-794751" server: "https://192.168.39.254:8443"
	I0120 13:08:25.159637 1941286 api_server.go:166] Checking apiserver status ...
	I0120 13:08:25.159682 1941286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:08:25.180359 1941286 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1153/cgroup
	W0120 13:08:25.193011 1941286 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1153/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 13:08:25.193088 1941286 ssh_runner.go:195] Run: ls
	I0120 13:08:25.197985 1941286 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 13:08:25.205094 1941286 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 13:08:25.205143 1941286 status.go:463] ha-794751 apiserver status = Running (err=<nil>)
	I0120 13:08:25.205154 1941286 status.go:176] ha-794751 status: &{Name:ha-794751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:08:25.205173 1941286 status.go:174] checking status of ha-794751-m02 ...
	I0120 13:08:25.205603 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:25.205668 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:25.222682 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34129
	I0120 13:08:25.223148 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:25.223652 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:25.223677 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:25.224060 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:25.224272 1941286 main.go:141] libmachine: (ha-794751-m02) Calling .GetState
	I0120 13:08:25.226108 1941286 status.go:371] ha-794751-m02 host status = "Stopped" (err=<nil>)
	I0120 13:08:25.226124 1941286 status.go:384] host is not running, skipping remaining checks
	I0120 13:08:25.226135 1941286 status.go:176] ha-794751-m02 status: &{Name:ha-794751-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:08:25.226155 1941286 status.go:174] checking status of ha-794751-m03 ...
	I0120 13:08:25.226533 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:25.226590 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:25.244451 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
	I0120 13:08:25.244981 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:25.245533 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:25.245566 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:25.245904 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:25.246124 1941286 main.go:141] libmachine: (ha-794751-m03) Calling .GetState
	I0120 13:08:25.248178 1941286 status.go:371] ha-794751-m03 host status = "Running" (err=<nil>)
	I0120 13:08:25.248206 1941286 host.go:66] Checking if "ha-794751-m03" exists ...
	I0120 13:08:25.248616 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:25.248668 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:25.265444 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37721
	I0120 13:08:25.265953 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:25.266547 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:25.266582 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:25.266986 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:25.267221 1941286 main.go:141] libmachine: (ha-794751-m03) Calling .GetIP
	I0120 13:08:25.270438 1941286 main.go:141] libmachine: (ha-794751-m03) DBG | domain ha-794751-m03 has defined MAC address 52:54:00:5d:78:d8 in network mk-ha-794751
	I0120 13:08:25.270935 1941286 main.go:141] libmachine: (ha-794751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:78:d8", ip: ""} in network mk-ha-794751: {Iface:virbr1 ExpiryTime:2025-01-20 14:04:28 +0000 UTC Type:0 Mac:52:54:00:5d:78:d8 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-794751-m03 Clientid:01:52:54:00:5d:78:d8}
	I0120 13:08:25.270984 1941286 main.go:141] libmachine: (ha-794751-m03) DBG | domain ha-794751-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:5d:78:d8 in network mk-ha-794751
	I0120 13:08:25.271118 1941286 host.go:66] Checking if "ha-794751-m03" exists ...
	I0120 13:08:25.271462 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:25.271506 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:25.289257 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38715
	I0120 13:08:25.289732 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:25.290325 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:25.290347 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:25.290648 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:25.290873 1941286 main.go:141] libmachine: (ha-794751-m03) Calling .DriverName
	I0120 13:08:25.291061 1941286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:08:25.291086 1941286 main.go:141] libmachine: (ha-794751-m03) Calling .GetSSHHostname
	I0120 13:08:25.294097 1941286 main.go:141] libmachine: (ha-794751-m03) DBG | domain ha-794751-m03 has defined MAC address 52:54:00:5d:78:d8 in network mk-ha-794751
	I0120 13:08:25.294663 1941286 main.go:141] libmachine: (ha-794751-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:78:d8", ip: ""} in network mk-ha-794751: {Iface:virbr1 ExpiryTime:2025-01-20 14:04:28 +0000 UTC Type:0 Mac:52:54:00:5d:78:d8 Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:ha-794751-m03 Clientid:01:52:54:00:5d:78:d8}
	I0120 13:08:25.294697 1941286 main.go:141] libmachine: (ha-794751-m03) DBG | domain ha-794751-m03 has defined IP address 192.168.39.193 and MAC address 52:54:00:5d:78:d8 in network mk-ha-794751
	I0120 13:08:25.294975 1941286 main.go:141] libmachine: (ha-794751-m03) Calling .GetSSHPort
	I0120 13:08:25.295163 1941286 main.go:141] libmachine: (ha-794751-m03) Calling .GetSSHKeyPath
	I0120 13:08:25.295302 1941286 main.go:141] libmachine: (ha-794751-m03) Calling .GetSSHUsername
	I0120 13:08:25.295402 1941286 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/ha-794751-m03/id_rsa Username:docker}
	I0120 13:08:25.383856 1941286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:08:25.404697 1941286 kubeconfig.go:125] found "ha-794751" server: "https://192.168.39.254:8443"
	I0120 13:08:25.404735 1941286 api_server.go:166] Checking apiserver status ...
	I0120 13:08:25.404783 1941286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:08:25.422160 1941286 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup
	W0120 13:08:25.434998 1941286 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 13:08:25.435054 1941286 ssh_runner.go:195] Run: ls
	I0120 13:08:25.440415 1941286 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0120 13:08:25.445372 1941286 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0120 13:08:25.445399 1941286 status.go:463] ha-794751-m03 apiserver status = Running (err=<nil>)
	I0120 13:08:25.445407 1941286 status.go:176] ha-794751-m03 status: &{Name:ha-794751-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:08:25.445423 1941286 status.go:174] checking status of ha-794751-m04 ...
	I0120 13:08:25.445722 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:25.445772 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:25.462978 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0120 13:08:25.463569 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:25.464196 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:25.464216 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:25.464570 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:25.464794 1941286 main.go:141] libmachine: (ha-794751-m04) Calling .GetState
	I0120 13:08:25.466332 1941286 status.go:371] ha-794751-m04 host status = "Running" (err=<nil>)
	I0120 13:08:25.466349 1941286 host.go:66] Checking if "ha-794751-m04" exists ...
	I0120 13:08:25.466662 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:25.466702 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:25.483633 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0120 13:08:25.484118 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:25.484575 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:25.484596 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:25.484923 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:25.485180 1941286 main.go:141] libmachine: (ha-794751-m04) Calling .GetIP
	I0120 13:08:25.488366 1941286 main.go:141] libmachine: (ha-794751-m04) DBG | domain ha-794751-m04 has defined MAC address 52:54:00:2b:46:a7 in network mk-ha-794751
	I0120 13:08:25.488804 1941286 main.go:141] libmachine: (ha-794751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:a7", ip: ""} in network mk-ha-794751: {Iface:virbr1 ExpiryTime:2025-01-20 14:05:56 +0000 UTC Type:0 Mac:52:54:00:2b:46:a7 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-794751-m04 Clientid:01:52:54:00:2b:46:a7}
	I0120 13:08:25.488849 1941286 main.go:141] libmachine: (ha-794751-m04) DBG | domain ha-794751-m04 has defined IP address 192.168.39.100 and MAC address 52:54:00:2b:46:a7 in network mk-ha-794751
	I0120 13:08:25.488978 1941286 host.go:66] Checking if "ha-794751-m04" exists ...
	I0120 13:08:25.489425 1941286 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:08:25.489473 1941286 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:08:25.505834 1941286 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0120 13:08:25.506376 1941286 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:08:25.506950 1941286 main.go:141] libmachine: Using API Version  1
	I0120 13:08:25.507001 1941286 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:08:25.507357 1941286 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:08:25.507598 1941286 main.go:141] libmachine: (ha-794751-m04) Calling .DriverName
	I0120 13:08:25.507791 1941286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:08:25.507813 1941286 main.go:141] libmachine: (ha-794751-m04) Calling .GetSSHHostname
	I0120 13:08:25.510690 1941286 main.go:141] libmachine: (ha-794751-m04) DBG | domain ha-794751-m04 has defined MAC address 52:54:00:2b:46:a7 in network mk-ha-794751
	I0120 13:08:25.511200 1941286 main.go:141] libmachine: (ha-794751-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:46:a7", ip: ""} in network mk-ha-794751: {Iface:virbr1 ExpiryTime:2025-01-20 14:05:56 +0000 UTC Type:0 Mac:52:54:00:2b:46:a7 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-794751-m04 Clientid:01:52:54:00:2b:46:a7}
	I0120 13:08:25.511251 1941286 main.go:141] libmachine: (ha-794751-m04) DBG | domain ha-794751-m04 has defined IP address 192.168.39.100 and MAC address 52:54:00:2b:46:a7 in network mk-ha-794751
	I0120 13:08:25.511391 1941286 main.go:141] libmachine: (ha-794751-m04) Calling .GetSSHPort
	I0120 13:08:25.511566 1941286 main.go:141] libmachine: (ha-794751-m04) Calling .GetSSHKeyPath
	I0120 13:08:25.511740 1941286 main.go:141] libmachine: (ha-794751-m04) Calling .GetSSHUsername
	I0120 13:08:25.511920 1941286 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/ha-794751-m04/id_rsa Username:docker}
	I0120 13:08:25.599373 1941286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:08:25.615571 1941286 status.go:176] ha-794751-m04 status: &{Name:ha-794751-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.74s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (52.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 node start m02 -v=7 --alsologtostderr
E0120 13:09:10.418783 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-794751 node start m02 -v=7 --alsologtostderr: (51.222325078s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (52.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (445.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-794751 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-794751 -v=7 --alsologtostderr
E0120 13:11:26.557101 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:11:54.260703 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:12:36.597334 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-794751 -v=7 --alsologtostderr: (4m34.426442677s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-794751 --wait=true -v=7 --alsologtostderr
E0120 13:13:59.674923 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:16:26.557484 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-794751 --wait=true -v=7 --alsologtostderr: (2m51.287210105s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-794751
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (445.83s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-794751 node delete m03 -v=7 --alsologtostderr: (17.566977996s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.34s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 stop -v=7 --alsologtostderr
E0120 13:17:36.597988 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:21:26.556643 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-794751 stop -v=7 --alsologtostderr: (4m32.284011995s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr: exit status 7 (115.976274ms)

                                                
                                                
-- stdout --
	ha-794751
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-794751-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-794751-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:21:36.570716 1945943 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:21:36.570833 1945943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:21:36.570841 1945943 out.go:358] Setting ErrFile to fd 2...
	I0120 13:21:36.570845 1945943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:21:36.571025 1945943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:21:36.571221 1945943 out.go:352] Setting JSON to false
	I0120 13:21:36.571265 1945943 mustload.go:65] Loading cluster: ha-794751
	I0120 13:21:36.571372 1945943 notify.go:220] Checking for updates...
	I0120 13:21:36.571706 1945943 config.go:182] Loaded profile config "ha-794751": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:21:36.571730 1945943 status.go:174] checking status of ha-794751 ...
	I0120 13:21:36.572209 1945943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:21:36.572254 1945943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:21:36.591812 1945943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0120 13:21:36.592275 1945943 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:21:36.592939 1945943 main.go:141] libmachine: Using API Version  1
	I0120 13:21:36.592968 1945943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:21:36.593265 1945943 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:21:36.593443 1945943 main.go:141] libmachine: (ha-794751) Calling .GetState
	I0120 13:21:36.595033 1945943 status.go:371] ha-794751 host status = "Stopped" (err=<nil>)
	I0120 13:21:36.595051 1945943 status.go:384] host is not running, skipping remaining checks
	I0120 13:21:36.595058 1945943 status.go:176] ha-794751 status: &{Name:ha-794751 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:21:36.595101 1945943 status.go:174] checking status of ha-794751-m02 ...
	I0120 13:21:36.595408 1945943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:21:36.595455 1945943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:21:36.610917 1945943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44131
	I0120 13:21:36.611405 1945943 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:21:36.611911 1945943 main.go:141] libmachine: Using API Version  1
	I0120 13:21:36.611936 1945943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:21:36.612290 1945943 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:21:36.612515 1945943 main.go:141] libmachine: (ha-794751-m02) Calling .GetState
	I0120 13:21:36.614194 1945943 status.go:371] ha-794751-m02 host status = "Stopped" (err=<nil>)
	I0120 13:21:36.614213 1945943 status.go:384] host is not running, skipping remaining checks
	I0120 13:21:36.614218 1945943 status.go:176] ha-794751-m02 status: &{Name:ha-794751-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:21:36.614278 1945943 status.go:174] checking status of ha-794751-m04 ...
	I0120 13:21:36.614565 1945943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:21:36.614597 1945943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:21:36.629994 1945943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0120 13:21:36.630589 1945943 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:21:36.631104 1945943 main.go:141] libmachine: Using API Version  1
	I0120 13:21:36.631131 1945943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:21:36.631438 1945943 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:21:36.631628 1945943 main.go:141] libmachine: (ha-794751-m04) Calling .GetState
	I0120 13:21:36.633353 1945943 status.go:371] ha-794751-m04 host status = "Stopped" (err=<nil>)
	I0120 13:21:36.633366 1945943 status.go:384] host is not running, skipping remaining checks
	I0120 13:21:36.633372 1945943 status.go:176] ha-794751-m04 status: &{Name:ha-794751-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (145.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-794751 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 13:22:36.597649 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:22:49.622662 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-794751 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m24.342261428s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (145.14s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-794751 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-794751 --control-plane -v=7 --alsologtostderr: (1m20.872359359s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-794751 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.78s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.92s)

                                                
                                    
TestJSONOutput/start/Command (59.47s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-563095 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0120 13:26:26.558722 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-563095 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (59.469069579s)
--- PASS: TestJSONOutput/start/Command (59.47s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-563095 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-563095 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-563095 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-563095 --output=json --user=testUser: (7.369690876s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-403750 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-403750 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.341336ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"03558343-cae7-4a60-8d76-f39ccfbb3c35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-403750] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a95cb704-dc7f-48d8-9845-a64b2a5d2787","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20242"}}
	{"specversion":"1.0","id":"6afb5edf-458c-4045-8873-ff3432d6efe0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2ad2f10e-8921-4733-ae77-437cc662aa01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig"}}
	{"specversion":"1.0","id":"3c8e0661-1506-43bb-a33d-1bd139228836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube"}}
	{"specversion":"1.0","id":"16386430-1c13-45a8-a227-e6aaf16ee8bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8e1c8102-3151-43ad-b167-107c6da7f759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f11da1e-3154-4f6a-97f1-1b50782dfbf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-403750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-403750
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (92.83s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-005664 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-005664 --driver=kvm2  --container-runtime=crio: (45.909113324s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-020497 --driver=kvm2  --container-runtime=crio
E0120 13:27:36.597445 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-020497 --driver=kvm2  --container-runtime=crio: (43.925452363s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-005664
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-020497
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-020497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-020497
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-020497: (1.030078956s)
helpers_test.go:175: Cleaning up "first-005664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-005664
--- PASS: TestMinikubeProfile (92.83s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-379402 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-379402 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.465706516s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.47s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-379402 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-379402 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.98s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-395819 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-395819 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.981507308s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.98s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-395819 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-395819 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-379402 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-395819 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-395819 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-395819
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-395819: (1.294113217s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.41s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-395819
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-395819: (22.412076466s)
--- PASS: TestMountStart/serial/RestartStopped (23.41s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-395819 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-395819 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916430 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 13:30:39.677065 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:31:26.556915 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-916430 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.870291288s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.30s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-916430 -- rollout status deployment/busybox: (2.753747711s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-8pz47 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-knw8t -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-8pz47 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-knw8t -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-8pz47 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-knw8t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.31s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-8pz47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-8pz47 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-knw8t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-916430 -- exec busybox-58667487b6-knw8t -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                    
TestMultiNode/serial/AddNode (52.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-916430 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-916430 -v 3 --alsologtostderr: (52.226785465s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.85s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-916430 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp testdata/cp-test.txt multinode-916430:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp multinode-916430:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3816961536/001/cp-test_multinode-916430.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp multinode-916430:/home/docker/cp-test.txt multinode-916430-m02:/home/docker/cp-test_multinode-916430_multinode-916430-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m02 "sudo cat /home/docker/cp-test_multinode-916430_multinode-916430-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp multinode-916430:/home/docker/cp-test.txt multinode-916430-m03:/home/docker/cp-test_multinode-916430_multinode-916430-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m03 "sudo cat /home/docker/cp-test_multinode-916430_multinode-916430-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp testdata/cp-test.txt multinode-916430-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp multinode-916430-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3816961536/001/cp-test_multinode-916430-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp multinode-916430-m02:/home/docker/cp-test.txt multinode-916430:/home/docker/cp-test_multinode-916430-m02_multinode-916430.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430 "sudo cat /home/docker/cp-test_multinode-916430-m02_multinode-916430.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp multinode-916430-m02:/home/docker/cp-test.txt multinode-916430-m03:/home/docker/cp-test_multinode-916430-m02_multinode-916430-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m03 "sudo cat /home/docker/cp-test_multinode-916430-m02_multinode-916430-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp testdata/cp-test.txt multinode-916430-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp multinode-916430-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3816961536/001/cp-test_multinode-916430-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp multinode-916430-m03:/home/docker/cp-test.txt multinode-916430:/home/docker/cp-test_multinode-916430-m03_multinode-916430.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430 "sudo cat /home/docker/cp-test_multinode-916430-m03_multinode-916430.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 cp multinode-916430-m03:/home/docker/cp-test.txt multinode-916430-m02:/home/docker/cp-test_multinode-916430-m03_multinode-916430-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 ssh -n multinode-916430-m02 "sudo cat /home/docker/cp-test_multinode-916430-m03_multinode-916430-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.65s)

                                                
                                    
TestMultiNode/serial/StopNode (3.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 node stop m03
E0120 13:32:36.597378 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-916430 node stop m03: (2.301875548s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-916430 status: exit status 7 (454.1574ms)

                                                
                                                
-- stdout --
	multinode-916430
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-916430-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-916430-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-916430 status --alsologtostderr: exit status 7 (445.889476ms)

                                                
                                                
-- stdout --
	multinode-916430
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-916430-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-916430-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:32:38.473442 1953847 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:32:38.473665 1953847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:32:38.473672 1953847 out.go:358] Setting ErrFile to fd 2...
	I0120 13:32:38.473677 1953847 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:32:38.473858 1953847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:32:38.474027 1953847 out.go:352] Setting JSON to false
	I0120 13:32:38.474059 1953847 mustload.go:65] Loading cluster: multinode-916430
	I0120 13:32:38.474172 1953847 notify.go:220] Checking for updates...
	I0120 13:32:38.474441 1953847 config.go:182] Loaded profile config "multinode-916430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:32:38.474465 1953847 status.go:174] checking status of multinode-916430 ...
	I0120 13:32:38.474942 1953847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:32:38.475000 1953847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:32:38.498245 1953847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46815
	I0120 13:32:38.498770 1953847 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:32:38.499356 1953847 main.go:141] libmachine: Using API Version  1
	I0120 13:32:38.499381 1953847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:32:38.499715 1953847 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:32:38.499969 1953847 main.go:141] libmachine: (multinode-916430) Calling .GetState
	I0120 13:32:38.502059 1953847 status.go:371] multinode-916430 host status = "Running" (err=<nil>)
	I0120 13:32:38.502088 1953847 host.go:66] Checking if "multinode-916430" exists ...
	I0120 13:32:38.502507 1953847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:32:38.502560 1953847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:32:38.518757 1953847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0120 13:32:38.519243 1953847 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:32:38.519733 1953847 main.go:141] libmachine: Using API Version  1
	I0120 13:32:38.519759 1953847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:32:38.520126 1953847 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:32:38.520365 1953847 main.go:141] libmachine: (multinode-916430) Calling .GetIP
	I0120 13:32:38.523536 1953847 main.go:141] libmachine: (multinode-916430) DBG | domain multinode-916430 has defined MAC address 52:54:00:6b:0a:38 in network mk-multinode-916430
	I0120 13:32:38.524044 1953847 main.go:141] libmachine: (multinode-916430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0a:38", ip: ""} in network mk-multinode-916430: {Iface:virbr1 ExpiryTime:2025-01-20 14:29:50 +0000 UTC Type:0 Mac:52:54:00:6b:0a:38 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:multinode-916430 Clientid:01:52:54:00:6b:0a:38}
	I0120 13:32:38.524071 1953847 main.go:141] libmachine: (multinode-916430) DBG | domain multinode-916430 has defined IP address 192.168.39.37 and MAC address 52:54:00:6b:0a:38 in network mk-multinode-916430
	I0120 13:32:38.524241 1953847 host.go:66] Checking if "multinode-916430" exists ...
	I0120 13:32:38.524610 1953847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:32:38.524658 1953847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:32:38.540647 1953847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0120 13:32:38.541170 1953847 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:32:38.541761 1953847 main.go:141] libmachine: Using API Version  1
	I0120 13:32:38.541790 1953847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:32:38.542143 1953847 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:32:38.542340 1953847 main.go:141] libmachine: (multinode-916430) Calling .DriverName
	I0120 13:32:38.542498 1953847 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:32:38.542517 1953847 main.go:141] libmachine: (multinode-916430) Calling .GetSSHHostname
	I0120 13:32:38.545499 1953847 main.go:141] libmachine: (multinode-916430) DBG | domain multinode-916430 has defined MAC address 52:54:00:6b:0a:38 in network mk-multinode-916430
	I0120 13:32:38.545975 1953847 main.go:141] libmachine: (multinode-916430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:0a:38", ip: ""} in network mk-multinode-916430: {Iface:virbr1 ExpiryTime:2025-01-20 14:29:50 +0000 UTC Type:0 Mac:52:54:00:6b:0a:38 Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:multinode-916430 Clientid:01:52:54:00:6b:0a:38}
	I0120 13:32:38.546008 1953847 main.go:141] libmachine: (multinode-916430) DBG | domain multinode-916430 has defined IP address 192.168.39.37 and MAC address 52:54:00:6b:0a:38 in network mk-multinode-916430
	I0120 13:32:38.546138 1953847 main.go:141] libmachine: (multinode-916430) Calling .GetSSHPort
	I0120 13:32:38.546324 1953847 main.go:141] libmachine: (multinode-916430) Calling .GetSSHKeyPath
	I0120 13:32:38.546481 1953847 main.go:141] libmachine: (multinode-916430) Calling .GetSSHUsername
	I0120 13:32:38.546638 1953847 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/multinode-916430/id_rsa Username:docker}
	I0120 13:32:38.623015 1953847 ssh_runner.go:195] Run: systemctl --version
	I0120 13:32:38.629694 1953847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:32:38.645990 1953847 kubeconfig.go:125] found "multinode-916430" server: "https://192.168.39.37:8443"
	I0120 13:32:38.646051 1953847 api_server.go:166] Checking apiserver status ...
	I0120 13:32:38.646091 1953847 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 13:32:38.662999 1953847 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1064/cgroup
	W0120 13:32:38.675240 1953847 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1064/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0120 13:32:38.675336 1953847 ssh_runner.go:195] Run: ls
	I0120 13:32:38.680404 1953847 api_server.go:253] Checking apiserver healthz at https://192.168.39.37:8443/healthz ...
	I0120 13:32:38.686183 1953847 api_server.go:279] https://192.168.39.37:8443/healthz returned 200:
	ok
	I0120 13:32:38.686218 1953847 status.go:463] multinode-916430 apiserver status = Running (err=<nil>)
	I0120 13:32:38.686230 1953847 status.go:176] multinode-916430 status: &{Name:multinode-916430 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:32:38.686259 1953847 status.go:174] checking status of multinode-916430-m02 ...
	I0120 13:32:38.686632 1953847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:32:38.686685 1953847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:32:38.703729 1953847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0120 13:32:38.704266 1953847 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:32:38.704858 1953847 main.go:141] libmachine: Using API Version  1
	I0120 13:32:38.704886 1953847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:32:38.705268 1953847 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:32:38.705482 1953847 main.go:141] libmachine: (multinode-916430-m02) Calling .GetState
	I0120 13:32:38.707092 1953847 status.go:371] multinode-916430-m02 host status = "Running" (err=<nil>)
	I0120 13:32:38.707110 1953847 host.go:66] Checking if "multinode-916430-m02" exists ...
	I0120 13:32:38.707409 1953847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:32:38.707451 1953847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:32:38.725040 1953847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0120 13:32:38.725584 1953847 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:32:38.726331 1953847 main.go:141] libmachine: Using API Version  1
	I0120 13:32:38.726356 1953847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:32:38.726734 1953847 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:32:38.726943 1953847 main.go:141] libmachine: (multinode-916430-m02) Calling .GetIP
	I0120 13:32:38.729599 1953847 main.go:141] libmachine: (multinode-916430-m02) DBG | domain multinode-916430-m02 has defined MAC address 52:54:00:a4:04:d1 in network mk-multinode-916430
	I0120 13:32:38.730067 1953847 main.go:141] libmachine: (multinode-916430-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:04:d1", ip: ""} in network mk-multinode-916430: {Iface:virbr1 ExpiryTime:2025-01-20 14:30:52 +0000 UTC Type:0 Mac:52:54:00:a4:04:d1 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-916430-m02 Clientid:01:52:54:00:a4:04:d1}
	I0120 13:32:38.730111 1953847 main.go:141] libmachine: (multinode-916430-m02) DBG | domain multinode-916430-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:a4:04:d1 in network mk-multinode-916430
	I0120 13:32:38.730258 1953847 host.go:66] Checking if "multinode-916430-m02" exists ...
	I0120 13:32:38.730572 1953847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:32:38.730637 1953847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:32:38.746628 1953847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39593
	I0120 13:32:38.747139 1953847 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:32:38.747697 1953847 main.go:141] libmachine: Using API Version  1
	I0120 13:32:38.747716 1953847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:32:38.747997 1953847 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:32:38.748190 1953847 main.go:141] libmachine: (multinode-916430-m02) Calling .DriverName
	I0120 13:32:38.748428 1953847 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 13:32:38.748457 1953847 main.go:141] libmachine: (multinode-916430-m02) Calling .GetSSHHostname
	I0120 13:32:38.751790 1953847 main.go:141] libmachine: (multinode-916430-m02) DBG | domain multinode-916430-m02 has defined MAC address 52:54:00:a4:04:d1 in network mk-multinode-916430
	I0120 13:32:38.752197 1953847 main.go:141] libmachine: (multinode-916430-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:04:d1", ip: ""} in network mk-multinode-916430: {Iface:virbr1 ExpiryTime:2025-01-20 14:30:52 +0000 UTC Type:0 Mac:52:54:00:a4:04:d1 Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:multinode-916430-m02 Clientid:01:52:54:00:a4:04:d1}
	I0120 13:32:38.752247 1953847 main.go:141] libmachine: (multinode-916430-m02) DBG | domain multinode-916430-m02 has defined IP address 192.168.39.157 and MAC address 52:54:00:a4:04:d1 in network mk-multinode-916430
	I0120 13:32:38.752374 1953847 main.go:141] libmachine: (multinode-916430-m02) Calling .GetSSHPort
	I0120 13:32:38.752557 1953847 main.go:141] libmachine: (multinode-916430-m02) Calling .GetSSHKeyPath
	I0120 13:32:38.752714 1953847 main.go:141] libmachine: (multinode-916430-m02) Calling .GetSSHUsername
	I0120 13:32:38.752857 1953847 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20242-1920423/.minikube/machines/multinode-916430-m02/id_rsa Username:docker}
	I0120 13:32:38.830697 1953847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 13:32:38.848510 1953847 status.go:176] multinode-916430-m02 status: &{Name:multinode-916430-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:32:38.848560 1953847 status.go:174] checking status of multinode-916430-m03 ...
	I0120 13:32:38.848984 1953847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:32:38.849040 1953847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:32:38.865519 1953847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I0120 13:32:38.866106 1953847 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:32:38.866669 1953847 main.go:141] libmachine: Using API Version  1
	I0120 13:32:38.866697 1953847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:32:38.867148 1953847 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:32:38.867411 1953847 main.go:141] libmachine: (multinode-916430-m03) Calling .GetState
	I0120 13:32:38.869176 1953847 status.go:371] multinode-916430-m03 host status = "Stopped" (err=<nil>)
	I0120 13:32:38.869197 1953847 status.go:384] host is not running, skipping remaining checks
	I0120 13:32:38.869205 1953847 status.go:176] multinode-916430-m03 status: &{Name:multinode-916430-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.20s)
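For reference, the status probes visible in the stderr block above can be reproduced by hand. This is a minimal sketch, assuming SSH access to the control-plane node and the apiserver address from the log (192.168.39.37:8443); the final curl call is an illustration of the healthz probe, not part of the minikube CLI:

	# disk usage of /var (second row of df, fifth column), as run by ssh_runner
	df -h /var | awk 'NR==2{print $5}'
	# kubelet service health
	sudo systemctl is-active --quiet service kubelet
	# locate the kube-apiserver process, then probe its healthz endpoint
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	curl -ks https://192.168.39.37:8443/healthz   # expect "ok", as in the 200 response above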

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-916430 node start m03 -v=7 --alsologtostderr: (39.419016064s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.08s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (344.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-916430
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-916430
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-916430: (3m3.976202677s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916430 --wait=true -v=8 --alsologtostderr
E0120 13:36:26.557367 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:37:36.597208 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-916430 --wait=true -v=8 --alsologtostderr: (2m40.011874968s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-916430
--- PASS: TestMultiNode/serial/RestartKeepsNodes (344.10s)
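The flow exercised here, sketched with the same commands shown above, is: capture the node list, stop the whole cluster, restart it with --wait=true, and confirm the node list is unchanged:

	out/minikube-linux-amd64 node list -p multinode-916430      # before
	out/minikube-linux-amd64 stop -p multinode-916430
	out/minikube-linux-amd64 start -p multinode-916430 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-916430      # after; should match the first listing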

                                                
                                    
TestMultiNode/serial/DeleteNode (2.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-916430 node delete m03: (2.286310779s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.86s)
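The Ready check above is written as a Go template; an equivalent jsonpath formulation (an illustration, not what the test itself runs) reads a little more directly:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'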

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 stop
E0120 13:39:29.626352 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:41:26.566145 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-916430 stop: (3m1.968165282s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-916430 status: exit status 7 (97.956231ms)

                                                
                                                
-- stdout --
	multinode-916430
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-916430-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-916430 status --alsologtostderr: exit status 7 (94.699788ms)

                                                
                                                
-- stdout --
	multinode-916430
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-916430-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:42:08.023281 1956860 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:42:08.023405 1956860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:42:08.023415 1956860 out.go:358] Setting ErrFile to fd 2...
	I0120 13:42:08.023418 1956860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:42:08.023610 1956860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:42:08.023783 1956860 out.go:352] Setting JSON to false
	I0120 13:42:08.023819 1956860 mustload.go:65] Loading cluster: multinode-916430
	I0120 13:42:08.023916 1956860 notify.go:220] Checking for updates...
	I0120 13:42:08.024242 1956860 config.go:182] Loaded profile config "multinode-916430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:42:08.024261 1956860 status.go:174] checking status of multinode-916430 ...
	I0120 13:42:08.024664 1956860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:42:08.024710 1956860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:42:08.043684 1956860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
	I0120 13:42:08.044306 1956860 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:42:08.044973 1956860 main.go:141] libmachine: Using API Version  1
	I0120 13:42:08.044990 1956860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:42:08.045405 1956860 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:42:08.045661 1956860 main.go:141] libmachine: (multinode-916430) Calling .GetState
	I0120 13:42:08.047432 1956860 status.go:371] multinode-916430 host status = "Stopped" (err=<nil>)
	I0120 13:42:08.047459 1956860 status.go:384] host is not running, skipping remaining checks
	I0120 13:42:08.047466 1956860 status.go:176] multinode-916430 status: &{Name:multinode-916430 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 13:42:08.047508 1956860 status.go:174] checking status of multinode-916430-m02 ...
	I0120 13:42:08.047835 1956860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0120 13:42:08.047893 1956860 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0120 13:42:08.063795 1956860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0120 13:42:08.064244 1956860 main.go:141] libmachine: () Calling .GetVersion
	I0120 13:42:08.064756 1956860 main.go:141] libmachine: Using API Version  1
	I0120 13:42:08.064775 1956860 main.go:141] libmachine: () Calling .SetConfigRaw
	I0120 13:42:08.065138 1956860 main.go:141] libmachine: () Calling .GetMachineName
	I0120 13:42:08.065384 1956860 main.go:141] libmachine: (multinode-916430-m02) Calling .GetState
	I0120 13:42:08.067292 1956860 status.go:371] multinode-916430-m02 host status = "Stopped" (err=<nil>)
	I0120 13:42:08.067309 1956860 status.go:384] host is not running, skipping remaining checks
	I0120 13:42:08.067344 1956860 status.go:176] multinode-916430-m02 status: &{Name:multinode-916430-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.16s)
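As the non-zero exits above show, `minikube status` reports a stopped cluster through its exit code (7) rather than through stderr, so scripts should branch on the exit status. A minimal sketch against the same binary and profile:

	out/minikube-linux-amd64 -p multinode-916430 status
	rc=$?
	[ "$rc" -eq 7 ] && echo "cluster is stopped"   # exit code 7 = stopped host, as in the runs above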

                                                
                                    
TestMultiNode/serial/RestartMultiNode (113.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916430 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0120 13:42:36.597599 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-916430 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.308738165s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-916430 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (113.87s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-916430
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916430-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-916430-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (73.96472ms)

                                                
                                                
-- stdout --
	* [multinode-916430-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-916430-m02' is duplicated with machine name 'multinode-916430-m02' in profile 'multinode-916430'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-916430-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-916430-m03 --driver=kvm2  --container-runtime=crio: (45.251275992s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-916430
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-916430: exit status 80 (228.899678ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-916430 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-916430-m03 already exists in multinode-916430-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-916430-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-916430-m03: (1.051556585s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.66s)

                                                
                                    
TestScheduledStopUnix (116.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-821532 --memory=2048 --driver=kvm2  --container-runtime=crio
E0120 13:47:36.597042 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-821532 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.245494392s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-821532 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-821532 -n scheduled-stop-821532
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-821532 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 13:48:20.483973 1927672 retry.go:31] will retry after 126.641µs: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.485119 1927672 retry.go:31] will retry after 175.773µs: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.486223 1927672 retry.go:31] will retry after 317.441µs: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.487388 1927672 retry.go:31] will retry after 363.27µs: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.488528 1927672 retry.go:31] will retry after 451.754µs: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.489657 1927672 retry.go:31] will retry after 747.452µs: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.490778 1927672 retry.go:31] will retry after 890.776µs: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.491932 1927672 retry.go:31] will retry after 1.477801ms: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.494174 1927672 retry.go:31] will retry after 3.124811ms: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.498425 1927672 retry.go:31] will retry after 4.856556ms: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.503689 1927672 retry.go:31] will retry after 8.027692ms: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.511906 1927672 retry.go:31] will retry after 11.353251ms: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.524189 1927672 retry.go:31] will retry after 19.163389ms: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.544517 1927672 retry.go:31] will retry after 21.060798ms: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
I0120 13:48:20.565764 1927672 retry.go:31] will retry after 40.085957ms: open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/scheduled-stop-821532/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-821532 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-821532 -n scheduled-stop-821532
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-821532
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-821532 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-821532
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-821532: exit status 7 (76.434259ms)

                                                
                                                
-- stdout --
	scheduled-stop-821532
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-821532 -n scheduled-stop-821532
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-821532 -n scheduled-stop-821532: exit status 7 (78.038339ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-821532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-821532
--- PASS: TestScheduledStopUnix (116.02s)
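Taken together, the steps above amount to the scheduled-stop workflow below (a sketch using only the flags exercised by the test; the sleep is simply an illustrative way to wait out the 15s schedule):

	minikube stop -p scheduled-stop-821532 --schedule 5m        # arm a stop 5 minutes out
	minikube stop -p scheduled-stop-821532 --cancel-scheduled   # disarm it again
	minikube stop -p scheduled-stop-821532 --schedule 15s       # arm a short stop and let it fire
	sleep 20
	minikube status -p scheduled-stop-821532                    # reports "host: Stopped", exit status 7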

                                                
                                    
TestRunningBinaryUpgrade (214.9s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.571552888 start -p running-upgrade-934502 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.571552888 start -p running-upgrade-934502 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.776067488s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-934502 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-934502 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.276061246s)
helpers_test.go:175: Cleaning up "running-upgrade-934502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-934502
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-934502: (1.264364971s)
--- PASS: TestRunningBinaryUpgrade (214.90s)
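The upgrade path being validated is simply re-running `start` against the same profile with the newer binary, so the existing cluster is adopted in place rather than recreated. Schematically, using the same commands as above:

	/tmp/minikube-v1.26.0.571552888 start -p running-upgrade-934502 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-934502 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio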

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-926915 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-926915 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (85.809793ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-926915] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
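The usage error above is the expected result: --no-kubernetes and --kubernetes-version are mutually exclusive. A valid no-Kubernetes start (the form used by the later subtests) omits the version flag, clearing any global default first if one was set:

	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-926915 --no-kubernetes --driver=kvm2 --container-runtime=crio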

                                                
                                    
TestPause/serial/Start (108.23s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-324820 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-324820 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m48.232920589s)
--- PASS: TestPause/serial/Start (108.23s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-926915 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-926915 --driver=kvm2  --container-runtime=crio: (1m36.289835528s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-926915 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.55s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (44.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-926915 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-926915 --no-kubernetes --driver=kvm2  --container-runtime=crio: (42.568841076s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-926915 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-926915 status -o json: exit status 2 (317.457514ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-926915","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-926915
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-926915: (1.147490522s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (44.03s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (62.88s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-324820 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0120 13:51:26.556468 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-324820 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.851618438s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (62.88s)

                                                
                                    
TestNoKubernetes/serial/Start (32.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-926915 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-926915 --no-kubernetes --driver=kvm2  --container-runtime=crio: (32.70721225s)
--- PASS: TestNoKubernetes/serial/Start (32.71s)

                                                
                                    
TestNetworkPlugins/group/false (3.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-798303 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-798303 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (123.950143ms)

                                                
                                                
-- stdout --
	* [false-798303] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0120 13:52:04.571705 1963087 out.go:345] Setting OutFile to fd 1 ...
	I0120 13:52:04.571849 1963087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:52:04.571858 1963087 out.go:358] Setting ErrFile to fd 2...
	I0120 13:52:04.571864 1963087 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 13:52:04.572187 1963087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20242-1920423/.minikube/bin
	I0120 13:52:04.573012 1963087 out.go:352] Setting JSON to false
	I0120 13:52:04.574531 1963087 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":20071,"bootTime":1737361054,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 13:52:04.574653 1963087 start.go:139] virtualization: kvm guest
	I0120 13:52:04.577018 1963087 out.go:177] * [false-798303] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 13:52:04.578297 1963087 notify.go:220] Checking for updates...
	I0120 13:52:04.578320 1963087 out.go:177]   - MINIKUBE_LOCATION=20242
	I0120 13:52:04.579633 1963087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 13:52:04.581032 1963087 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20242-1920423/kubeconfig
	I0120 13:52:04.582419 1963087 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20242-1920423/.minikube
	I0120 13:52:04.583850 1963087 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 13:52:04.587062 1963087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 13:52:04.588976 1963087 config.go:182] Loaded profile config "NoKubernetes-926915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0120 13:52:04.589180 1963087 config.go:182] Loaded profile config "pause-324820": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0120 13:52:04.589308 1963087 config.go:182] Loaded profile config "running-upgrade-934502": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0120 13:52:04.589440 1963087 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 13:52:04.630780 1963087 out.go:177] * Using the kvm2 driver based on user configuration
	I0120 13:52:04.632002 1963087 start.go:297] selected driver: kvm2
	I0120 13:52:04.632028 1963087 start.go:901] validating driver "kvm2" against <nil>
	I0120 13:52:04.632046 1963087 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 13:52:04.634229 1963087 out.go:201] 
	W0120 13:52:04.635406 1963087 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0120 13:52:04.636567 1963087 out.go:201] 

                                                
                                                
** /stderr **
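This exit 14 is what the test expects: with --container-runtime=crio, minikube rejects --cni=false because, as the error states, the crio runtime requires a CNI. A start that satisfies the constraint names a CNI explicitly; the sketch below uses bridge purely as an illustrative choice among the values minikube's --cni flag accepts:

	minikube start -p false-798303 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio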
net_test.go:88: 
----------------------- debugLogs start: false-798303 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-798303" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 13:50:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.188:8443
  name: pause-324820
contexts:
- context:
    cluster: pause-324820
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 13:50:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-324820
  name: pause-324820
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-324820
  user:
    client-certificate: /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/pause-324820/client.crt
    client-key: /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/pause-324820/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-798303

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798303"

                                                
                                                
----------------------- debugLogs end: false-798303 [took: 3.613717475s] --------------------------------
helpers_test.go:175: Cleaning up "false-798303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-798303
--- PASS: TestNetworkPlugins/group/false (3.95s)

                                                
                                    
TestPause/serial/Pause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-324820 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-324820 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-324820 --output=json --layout=cluster: exit status 2 (284.532952ms)

                                                
                                                
-- stdout --
	{"Name":"pause-324820","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-324820","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
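
For readers who want to script against the cluster-status JSON captured above, a minimal sketch (assuming jq is available on the host; the field names are taken from the stdout block in this test, and note that the status command itself exits 2 while the cluster is paused, as the non-zero exit above shows):

	out/minikube-linux-amd64 status -p pause-324820 --output=json --layout=cluster \
	  | jq -r '.Nodes[] | "\(.Name): apiserver=\(.Components.apiserver.StatusName), kubelet=\(.Components.kubelet.StatusName)"'
	# given the output above, this prints: pause-324820: apiserver=Paused, kubelet=Stopped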

                                                
                                    
TestPause/serial/Unpause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-324820 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

                                                
                                    
TestPause/serial/PauseAgain (1.04s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-324820 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-324820 --alsologtostderr -v=5: (1.041432453s)
--- PASS: TestPause/serial/PauseAgain (1.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-926915 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-926915 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.522133ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestPause/serial/DeletePaused (1.09s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-324820 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-324820 --alsologtostderr -v=5: (1.091229144s)
--- PASS: TestPause/serial/DeletePaused (1.09s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (14.69s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0120 13:52:36.597414 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.68987486s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.69s)
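
A hedged sketch of confirming that the deleted profile no longer shows up in the profile list JSON (the top-level valid/invalid arrays and the Name field are assumptions about the output layout, which may differ between minikube versions):

	out/minikube-linux-amd64 profile list --output json \
	  | jq -r '(.valid // [])[].Name, (.invalid // [])[].Name' \
	  | grep -qx 'pause-324820' && echo "profile still present" || echo "profile gone"
	# valid/invalid are assumed key names; adjust to the schema your minikube version emits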

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (96.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.195783585 start -p stopped-upgrade-795137 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.195783585 start -p stopped-upgrade-795137 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (49.444181943s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.195783585 -p stopped-upgrade-795137 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.195783585 -p stopped-upgrade-795137 stop: (2.147875347s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-795137 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-795137 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.982209173s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (96.57s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-795137
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-795137: (1.010497108s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (99.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-648067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E0120 13:56:09.628598 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
E0120 13:56:26.556991 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-648067 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m39.390672624s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (99.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (75.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-647109 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-647109 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m15.595029514s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.60s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-648067 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e90faea2-996f-4156-9948-f3d9b8f65a86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e90faea2-996f-4156-9948-f3d9b8f65a86] Running
E0120 13:57:36.597091 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/addons-917221/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003499137s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-648067 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-648067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-648067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.06183735s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-648067 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-648067 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-648067 --alsologtostderr -v=3: (1m31.221814128s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-647109 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fffaa12b-f4f4-46e5-9885-067e2e1827f5] Pending
helpers_test.go:344: "busybox" [fffaa12b-f4f4-46e5-9885-067e2e1827f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fffaa12b-f4f4-46e5-9885-067e2e1827f5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004527477s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-647109 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-727256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-727256 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m23.179303555s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-647109 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-647109 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-647109 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-647109 --alsologtostderr -v=3: (1m31.572183955s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-648067 -n no-preload-648067
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-648067 -n no-preload-648067: exit status 7 (72.34499ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-648067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-727256 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [79f0a8e1-87f9-455d-9504-384893f84f68] Pending
helpers_test.go:344: "busybox" [79f0a8e1-87f9-455d-9504-384893f84f68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [79f0a8e1-87f9-455d-9504-384893f84f68] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003807133s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-727256 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-727256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-727256 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-727256 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-727256 --alsologtostderr -v=3: (1m31.098464426s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-647109 -n embed-certs-647109
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-647109 -n embed-certs-647109: exit status 7 (77.288087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-647109 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-191446 --alsologtostderr -v=3
E0120 14:01:26.557253 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-191446 --alsologtostderr -v=3: (3.310369548s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-191446 -n old-k8s-version-191446: exit status 7 (86.028888ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-191446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-727256 -n default-k8s-diff-port-727256
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-727256 -n default-k8s-diff-port-727256: exit status 7 (78.992463ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-727256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (50.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-345509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-345509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (50.076074357s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.08s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m26.321020277s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-345509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-345509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.352300496s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-345509 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-345509 --alsologtostderr -v=3: (7.335207268s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-345509 -n newest-cni-345509
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-345509 -n newest-cni-345509: exit status 7 (96.109155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-345509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-345509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E0120 14:26:26.557415 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/functional-038507/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-345509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (38.0375945s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-345509 -n newest-cni-345509
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-345509 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-345509 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-345509 -n newest-cni-345509
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-345509 -n newest-cni-345509: exit status 2 (263.332667ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-345509 -n newest-cni-345509
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-345509 -n newest-cni-345509: exit status 2 (259.983361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-345509 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-345509 -n newest-cni-345509
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-345509 -n newest-cni-345509
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (64.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.539393306s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.54s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (104.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m44.04906622s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.05s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-798303 "pgrep -a kubelet"
I0120 14:27:18.023190 1927672 config.go:182] Loaded profile config "auto-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-798303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7ncxj" [83c0e14a-a924-4a97-8739-3344af5f4ed5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7ncxj" [83c0e14a-a924-4a97-8739-3344af5f4ed5] Running
E0120 14:27:27.435726 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:27:27.442213 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:27:27.453626 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:27:27.475108 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:27:27.516627 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:27:27.598048 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:27:27.759672 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:27:28.081369 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:27:28.723671 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003753835s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-798303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (88.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0120 14:27:47.932312 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:28:08.414501 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m28.58223922s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.58s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xb9ph" [cb45af23-b7cd-40db-89cf-c5e0ee99c675] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00438554s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-798303 "pgrep -a kubelet"
I0120 14:28:16.432809 1927672 config.go:182] Loaded profile config "kindnet-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-798303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9lgbf" [d8737388-c854-4480-899b-af81d6ca3560] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9lgbf" [d8737388-c854-4480-899b-af81d6ca3560] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005435713s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-798303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (86.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m26.747954684s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.75s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (76.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0120 14:28:49.376475 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m16.547602354s)
--- PASS: TestNetworkPlugins/group/flannel/Start (76.55s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-n2wv5" [c80dd687-235a-4ad4-b11c-d27b7bddbd5c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005553442s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-798303 "pgrep -a kubelet"
I0120 14:29:00.065237 1927672 config.go:182] Loaded profile config "calico-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-798303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ggldb" [ff68f949-e944-45ef-b4bb-304f602c8f0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ggldb" [ff68f949-e944-45ef-b4bb-304f602c8f0e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.005193576s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.36s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-798303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-798303 "pgrep -a kubelet"
I0120 14:29:16.231433 1927672 config.go:182] Loaded profile config "custom-flannel-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-798303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t7z9m" [9f1e26fc-6448-4fc2-9a2d-284fd1e1aff5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t7z9m" [9f1e26fc-6448-4fc2-9a2d-284fd1e1aff5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006921671s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-798303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (59.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (59.858332182s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.86s)
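The Start subtests in this file differ only in the --cni flag they pass. Stripped of the test-harness binary path, the invocation above is equivalent to the sketch below; the final ssh line is an extra manual check not run by the test and assumes the standard /etc/cni/net.d location for generated CNI configuration:

    # start a crio cluster on the kvm2 driver with the kernel bridge CNI (flags copied from the log)
    minikube start -p bridge-798303 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
      --cni=bridge --driver=kvm2 --container-runtime=crio
    # optional: confirm the bridge CNI config landed on the node
    minikube ssh -p bridge-798303 "ls /etc/cni/net.d"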

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-798303 "pgrep -a kubelet"
I0120 14:29:57.842150 1927672 config.go:182] Loaded profile config "enable-default-cni-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-798303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2m7x5" [279ba841-a652-4bc4-8033-cb97919b9830] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0120 14:29:58.569122 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:58.575558 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:58.587088 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:58.608566 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:58.650141 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:58.731690 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:58.893358 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:59.214733 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:59.342890 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:29:59.856430 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:30:01.138396 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-2m7x5" [279ba841-a652-4bc4-8033-cb97919b9830] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004782969s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-km94t" [cf994050-814d-42fa-af64-108c64b4cc4b] Running
E0120 14:30:03.700304 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
E0120 14:30:08.822210 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005265311s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
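Unlike bridge or enable-default-cni, flannel (like calico earlier) ships its own agent, so this group has an extra ControllerPod subtest that waits for the agent pod before running the connectivity checks. An equivalent manual check, using the namespace, label, and timeout shown in the log:

    # flannel's agent runs as a DaemonSet pod labelled app=flannel in the kube-flannel namespace
    kubectl --context flannel-798303 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-798303 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m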

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-798303 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-798303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I0120 14:30:09.486399 1927672 config.go:182] Loaded profile config "flannel-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-798303 replace --force -f testdata/netcat-deployment.yaml
E0120 14:30:09.585055 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/default-k8s-diff-port-727256/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t5cpp" [0fa7670a-efbd-499b-9bf6-33eac326893c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0120 14:30:11.297986 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/no-preload-648067/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-t5cpp" [0fa7670a-efbd-499b-9bf6-33eac326893c] Running
E0120 14:30:19.063556 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003807618s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-798303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-798303 "pgrep -a kubelet"
I0120 14:30:33.080973 1927672 config.go:182] Loaded profile config "bridge-798303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-798303 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-p42pc" [32d545d0-aabc-4246-a1e7-0242c3b80367] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-p42pc" [32d545d0-aabc-4246-a1e7-0242c3b80367] Running
E0120 14:30:39.545790 1927672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/old-k8s-version-191446/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004433171s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-798303 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-798303 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (39/311)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.0/cached-images 0
15 TestDownloadOnly/v1.32.0/binaries 0
16 TestDownloadOnly/v1.32.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.37
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestStartStop/group/disable-driver-mounts 0.16
269 TestNetworkPlugins/group/kubenet 3.63
277 TestNetworkPlugins/group/cilium 4.95
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.37s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-917221 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.37s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-955986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-955986
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-798303 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-798303" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 13:50:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.188:8443
  name: pause-324820
contexts:
- context:
    cluster: pause-324820
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 13:50:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-324820
  name: pause-324820
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-324820
  user:
    client-certificate: /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/pause-324820/client.crt
    client-key: /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/pause-324820/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-798303

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798303"

                                                
                                                
----------------------- debugLogs end: kubenet-798303 [took: 3.462917481s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-798303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-798303
--- SKIP: TestNetworkPlugins/group/kubenet (3.63s)
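Every debugLogs probe above fails with "context was not found" or "Profile ... not found" because the kubenet profile is skipped before a cluster is ever started, so neither a kubeconfig context nor a minikube profile exists for it. The suggestions embedded in that output can be verified by hand:

    # list kubeconfig contexts and minikube profiles; kubenet-798303 appears in neither
    kubectl config get-contexts
    minikube profile list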

                                                
                                    
TestNetworkPlugins/group/cilium (4.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-798303 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-798303" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20242-1920423/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 13:50:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.188:8443
  name: pause-324820
contexts:
- context:
    cluster: pause-324820
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 13:50:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-324820
  name: pause-324820
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-324820
  user:
    client-certificate: /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/pause-324820/client.crt
    client-key: /home/jenkins/minikube-integration/20242-1920423/.minikube/profiles/pause-324820/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-798303

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-798303" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798303"

                                                
                                                
----------------------- debugLogs end: cilium-798303 [took: 4.462362686s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-798303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-798303
--- SKIP: TestNetworkPlugins/group/cilium (4.95s)

                                                
                                    